Test Report: Docker_Linux_containerd_arm64 19337

Commit: a9f4e4a9a8ef6f7d1064a3bd8285d9113f3d3767:2024-07-29:35545

Failed tests (2/336)

Order  Failed test                                             Duration (s)
38     TestAddons/serial/Volcano                               199.7
311    TestStartStop/group/old-k8s-version/serial/SecondStart  385.55
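To rerun either failure from a minikube checkout, the integration suite takes standard go test selectors plus a -minikube-start-args flag for the driver/runtime combination recorded in the logs below. A sketch, not the verbatim CI invocation:

	# Rerun the Volcano addon test against the docker driver + containerd
	# runtime used by this job (flag values taken from the Audit log below;
	# treat the exact harness flags as an assumption).
	go test ./test/integration -v -timeout 60m \
	  -run 'TestAddons/serial/Volcano' \
	  -minikube-start-args='--driver=docker --container-runtime=containerd'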
TestAddons/serial/Volcano (199.7s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 43.920275ms
addons_test.go:905: volcano-admission stabilized in 45.181185ms
addons_test.go:913: volcano-controller stabilized in 45.728449ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-lrnvd" [c1660426-14b8-411f-adf8-71daaef61f33] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003662482s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-2zrfw" [3fd92b96-3167-462a-bdfb-19f2a95be7cb] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.005183562s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-z85mm" [c277719c-eb79-4776-8475-88e1e37801df] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003347824s
addons_test.go:932: (dbg) Run:  kubectl --context addons-299185 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-299185 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-299185 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [1bcb79a4-51cf-41fa-9353-7b377e1949b3] Pending
helpers_test.go:344: "test-job-nginx-0" [1bcb79a4-51cf-41fa-9353-7b377e1949b3] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-299185 -n addons-299185
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-07-29 10:29:47.605624654 +0000 UTC m=+385.365860494
addons_test.go:964: (dbg) Run:  kubectl --context addons-299185 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-299185 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-2a31c86d-c607-4d15-8b93-fda8cf897525
                  volcano.sh/job-name: test-job
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2pbrd (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-2pbrd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-299185 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-299185 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
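The FailedScheduling event above means the scheduler could not find a spare full CPU for test-job-nginx-0, which requests cpu: 1 on a node provisioned with 2 CPUs (see NanoCpus in the docker inspect below), most of which the enabled addons already hold. On a live reproduction, standard kubectl queries can confirm the arithmetic; a diagnostic sketch using the context and node names from this run:

	# Allocatable CPU on the single minikube node:
	kubectl --context addons-299185 get node addons-299185 \
	  -o jsonpath='{.status.allocatable.cpu}{"\n"}'
	# CPU requests already committed, as the scheduler sees them:
	kubectl --context addons-299185 describe node addons-299185 | grep -A 8 'Allocated resources'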
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-299185
helpers_test.go:235: (dbg) docker inspect addons-299185:

-- stdout --
	[
	    {
	        "Id": "fd664f31da551feab5a9f552c186d4211521f77e3c3af2c11f4702b4f6a0729d",
	        "Created": "2024-07-29T10:24:14.808545809Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 2911290,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-29T10:24:14.945296298Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2cd84ab2172023a68162f38a55db46353562cea41552fd8e8bdec97b31b2c495",
	        "ResolvConfPath": "/var/lib/docker/containers/fd664f31da551feab5a9f552c186d4211521f77e3c3af2c11f4702b4f6a0729d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fd664f31da551feab5a9f552c186d4211521f77e3c3af2c11f4702b4f6a0729d/hostname",
	        "HostsPath": "/var/lib/docker/containers/fd664f31da551feab5a9f552c186d4211521f77e3c3af2c11f4702b4f6a0729d/hosts",
	        "LogPath": "/var/lib/docker/containers/fd664f31da551feab5a9f552c186d4211521f77e3c3af2c11f4702b4f6a0729d/fd664f31da551feab5a9f552c186d4211521f77e3c3af2c11f4702b4f6a0729d-json.log",
	        "Name": "/addons-299185",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-299185:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-299185",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/ead7281a6310a514dbd41d79a25a3c4fc9f350c904c8767481c5c8dfee7e26ff-init/diff:/var/lib/docker/overlay2/b09444c3e24393d9bf23bfbe615192567d3e49b78ae04c34cc2ea1bd8f080cde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ead7281a6310a514dbd41d79a25a3c4fc9f350c904c8767481c5c8dfee7e26ff/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ead7281a6310a514dbd41d79a25a3c4fc9f350c904c8767481c5c8dfee7e26ff/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ead7281a6310a514dbd41d79a25a3c4fc9f350c904c8767481c5c8dfee7e26ff/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-299185",
	                "Source": "/var/lib/docker/volumes/addons-299185/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-299185",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-299185",
	                "name.minikube.sigs.k8s.io": "addons-299185",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "226e92c117d6062d3b272ef15ac4a96bdc23071ea97682980c48e3851f34d28c",
	            "SandboxKey": "/var/run/docker/netns/226e92c117d6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36469"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36470"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36473"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36471"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36472"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-299185": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ae29c3bf37f23a53e8be505a02e8a7c0e671e64bc1bf39b16cd05667bcb91856",
	                    "EndpointID": "b099412ecbe735d46dd375cd366df48882b267956ab5188e638eca017817d6bb",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-299185",
	                        "fd664f31da55"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
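Two HostConfig fields in the inspect output tie back to the start flags recorded in the Audit log below: "Memory": 4194304000 bytes is the --memory=4000 (MiB) flag, and "NanoCpus": 2000000000 is 2 CPUs. Both can be read directly with docker's --format templating; a sketch using the container name from this run:

	docker inspect addons-299185 \
	  --format 'mem={{.HostConfig.Memory}} cpus={{.HostConfig.NanoCpus}}'
	# mem=4194304000 cpus=2000000000  ->  4000 MiB of memory, 2 CPUs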
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-299185 -n addons-299185
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-299185 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-299185 logs -n 25: (1.572087744s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-425957   | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC |                     |
	|         | -p download-only-425957              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC | 29 Jul 24 10:23 UTC |
	| delete  | -p download-only-425957              | download-only-425957   | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC | 29 Jul 24 10:23 UTC |
	| start   | -o=json --download-only              | download-only-735175   | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC |                     |
	|         | -p download-only-735175              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC | 29 Jul 24 10:23 UTC |
	| delete  | -p download-only-735175              | download-only-735175   | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC | 29 Jul 24 10:23 UTC |
	| start   | -o=json --download-only              | download-only-012491   | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC |                     |
	|         | -p download-only-012491              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0  |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC | 29 Jul 24 10:23 UTC |
	| delete  | -p download-only-012491              | download-only-012491   | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC | 29 Jul 24 10:23 UTC |
	| delete  | -p download-only-425957              | download-only-425957   | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC | 29 Jul 24 10:23 UTC |
	| delete  | -p download-only-735175              | download-only-735175   | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC | 29 Jul 24 10:23 UTC |
	| delete  | -p download-only-012491              | download-only-012491   | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC | 29 Jul 24 10:23 UTC |
	| start   | --download-only -p                   | download-docker-578308 | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC |                     |
	|         | download-docker-578308               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-578308            | download-docker-578308 | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC | 29 Jul 24 10:23 UTC |
	| start   | --download-only -p                   | binary-mirror-323309   | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC |                     |
	|         | binary-mirror-323309                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36657               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-323309              | binary-mirror-323309   | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC | 29 Jul 24 10:23 UTC |
	| addons  | disable dashboard -p                 | addons-299185          | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC |                     |
	|         | addons-299185                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-299185          | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC |                     |
	|         | addons-299185                        |                        |         |         |                     |                     |
	| start   | -p addons-299185 --wait=true         | addons-299185          | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC | 29 Jul 24 10:26 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 10:23:50
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 10:23:50.333548 2910803 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:23:50.333680 2910803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:23:50.333690 2910803 out.go:304] Setting ErrFile to fd 2...
	I0729 10:23:50.333695 2910803 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:23:50.333921 2910803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-2904404/.minikube/bin
	I0729 10:23:50.334367 2910803 out.go:298] Setting JSON to false
	I0729 10:23:50.335259 2910803 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":65181,"bootTime":1722183450,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0729 10:23:50.335326 2910803 start.go:139] virtualization:  
	I0729 10:23:50.337908 2910803 out.go:177] * [addons-299185] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0729 10:23:50.341155 2910803 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 10:23:50.341369 2910803 notify.go:220] Checking for updates...
	I0729 10:23:50.345727 2910803 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:23:50.347699 2910803 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-2904404/kubeconfig
	I0729 10:23:50.349626 2910803 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-2904404/.minikube
	I0729 10:23:50.351318 2910803 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0729 10:23:50.353518 2910803 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:23:50.355705 2910803 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:23:50.376072 2910803 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0729 10:23:50.376203 2910803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 10:23:50.441021 2910803 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-29 10:23:50.431575866 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0729 10:23:50.441172 2910803 docker.go:307] overlay module found
	I0729 10:23:50.443023 2910803 out.go:177] * Using the docker driver based on user configuration
	I0729 10:23:50.444628 2910803 start.go:297] selected driver: docker
	I0729 10:23:50.444652 2910803 start.go:901] validating driver "docker" against <nil>
	I0729 10:23:50.444666 2910803 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:23:50.445310 2910803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 10:23:50.495213 2910803 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-29 10:23:50.485899126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0729 10:23:50.495399 2910803 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:23:50.495637 2910803 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:23:50.497659 2910803 out.go:177] * Using Docker driver with root privileges
	I0729 10:23:50.499371 2910803 cni.go:84] Creating CNI manager for ""
	I0729 10:23:50.499389 2910803 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0729 10:23:50.499401 2910803 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 10:23:50.499491 2910803 start.go:340] cluster config:
	{Name:addons-299185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-299185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:23:50.501787 2910803 out.go:177] * Starting "addons-299185" primary control-plane node in "addons-299185" cluster
	I0729 10:23:50.503978 2910803 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0729 10:23:50.506269 2910803 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0729 10:23:50.508308 2910803 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0729 10:23:50.508357 2910803 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19337-2904404/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4
	I0729 10:23:50.508369 2910803 cache.go:56] Caching tarball of preloaded images
	I0729 10:23:50.508400 2910803 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0729 10:23:50.508452 2910803 preload.go:172] Found /home/jenkins/minikube-integration/19337-2904404/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 10:23:50.508462 2910803 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on containerd
	I0729 10:23:50.508813 2910803 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/config.json ...
	I0729 10:23:50.508879 2910803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/config.json: {Name:mk757c871f3bba1d28d054a4e32f14fd5aff0fa5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:23:50.523144 2910803 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 10:23:50.523271 2910803 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0729 10:23:50.523296 2910803 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0729 10:23:50.523302 2910803 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0729 10:23:50.523311 2910803 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0729 10:23:50.523320 2910803 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0729 10:24:07.596014 2910803 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0729 10:24:07.596055 2910803 cache.go:194] Successfully downloaded all kic artifacts
	I0729 10:24:07.596103 2910803 start.go:360] acquireMachinesLock for addons-299185: {Name:mke50dcf7e9faaff58c8764f4c0aed2314665803 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 10:24:07.596722 2910803 start.go:364] duration metric: took 593.023µs to acquireMachinesLock for "addons-299185"
	I0729 10:24:07.596756 2910803 start.go:93] Provisioning new machine with config: &{Name:addons-299185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-299185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0729 10:24:07.596850 2910803 start.go:125] createHost starting for "" (driver="docker")
	I0729 10:24:07.598858 2910803 out.go:204] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0729 10:24:07.599095 2910803 start.go:159] libmachine.API.Create for "addons-299185" (driver="docker")
	I0729 10:24:07.599131 2910803 client.go:168] LocalClient.Create starting
	I0729 10:24:07.599265 2910803 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/ca.pem
	I0729 10:24:07.888008 2910803 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/cert.pem
	I0729 10:24:08.353629 2910803 cli_runner.go:164] Run: docker network inspect addons-299185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 10:24:08.368501 2910803 cli_runner.go:211] docker network inspect addons-299185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 10:24:08.368589 2910803 network_create.go:284] running [docker network inspect addons-299185] to gather additional debugging logs...
	I0729 10:24:08.368611 2910803 cli_runner.go:164] Run: docker network inspect addons-299185
	W0729 10:24:08.383125 2910803 cli_runner.go:211] docker network inspect addons-299185 returned with exit code 1
	I0729 10:24:08.383155 2910803 network_create.go:287] error running [docker network inspect addons-299185]: docker network inspect addons-299185: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-299185 not found
	I0729 10:24:08.383169 2910803 network_create.go:289] output of [docker network inspect addons-299185]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-299185 not found
	
	** /stderr **
	I0729 10:24:08.383260 2910803 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 10:24:08.397765 2910803 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400177de90}
	I0729 10:24:08.397808 2910803 network_create.go:124] attempt to create docker network addons-299185 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0729 10:24:08.397873 2910803 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-299185 addons-299185
	I0729 10:24:08.467141 2910803 network_create.go:108] docker network addons-299185 192.168.49.0/24 created
	I0729 10:24:08.467178 2910803 kic.go:121] calculated static IP "192.168.49.2" for the "addons-299185" container
	I0729 10:24:08.467250 2910803 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 10:24:08.481202 2910803 cli_runner.go:164] Run: docker volume create addons-299185 --label name.minikube.sigs.k8s.io=addons-299185 --label created_by.minikube.sigs.k8s.io=true
	I0729 10:24:08.497038 2910803 oci.go:103] Successfully created a docker volume addons-299185
	I0729 10:24:08.497136 2910803 cli_runner.go:164] Run: docker run --rm --name addons-299185-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-299185 --entrypoint /usr/bin/test -v addons-299185:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 10:24:10.574452 2910803 cli_runner.go:217] Completed: docker run --rm --name addons-299185-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-299185 --entrypoint /usr/bin/test -v addons-299185:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib: (2.077268254s)
	I0729 10:24:10.574485 2910803 oci.go:107] Successfully prepared a docker volume addons-299185
	I0729 10:24:10.574505 2910803 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0729 10:24:10.574524 2910803 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 10:24:10.574628 2910803 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19337-2904404/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-299185:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
	I0729 10:24:14.742660 2910803 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19337-2904404/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-299185:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir: (4.167978007s)
	I0729 10:24:14.742691 2910803 kic.go:203] duration metric: took 4.168163861s to extract preloaded images to volume ...
	W0729 10:24:14.742824 2910803 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0729 10:24:14.742930 2910803 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0729 10:24:14.794941 2910803 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-299185 --name addons-299185 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-299185 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-299185 --network addons-299185 --ip 192.168.49.2 --volume addons-299185:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7
	I0729 10:24:15.140396 2910803 cli_runner.go:164] Run: docker container inspect addons-299185 --format={{.State.Running}}
	I0729 10:24:15.161936 2910803 cli_runner.go:164] Run: docker container inspect addons-299185 --format={{.State.Status}}
	I0729 10:24:15.181481 2910803 cli_runner.go:164] Run: docker exec addons-299185 stat /var/lib/dpkg/alternatives/iptables
	I0729 10:24:15.245744 2910803 oci.go:144] the created container "addons-299185" has a running status.
	I0729 10:24:15.245771 2910803 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19337-2904404/.minikube/machines/addons-299185/id_rsa...
	I0729 10:24:16.312226 2910803 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19337-2904404/.minikube/machines/addons-299185/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0729 10:24:16.334199 2910803 cli_runner.go:164] Run: docker container inspect addons-299185 --format={{.State.Status}}
	I0729 10:24:16.355117 2910803 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0729 10:24:16.355137 2910803 kic_runner.go:114] Args: [docker exec --privileged addons-299185 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0729 10:24:16.402625 2910803 cli_runner.go:164] Run: docker container inspect addons-299185 --format={{.State.Status}}
	I0729 10:24:16.420486 2910803 machine.go:94] provisionDockerMachine start ...
	I0729 10:24:16.420586 2910803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-299185
	I0729 10:24:16.437474 2910803 main.go:141] libmachine: Using SSH client type: native
	I0729 10:24:16.437752 2910803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 36469 <nil> <nil>}
	I0729 10:24:16.437767 2910803 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 10:24:16.567156 2910803 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-299185
	
	I0729 10:24:16.567181 2910803 ubuntu.go:169] provisioning hostname "addons-299185"
	I0729 10:24:16.567246 2910803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-299185
	I0729 10:24:16.583877 2910803 main.go:141] libmachine: Using SSH client type: native
	I0729 10:24:16.584128 2910803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 36469 <nil> <nil>}
	I0729 10:24:16.584139 2910803 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-299185 && echo "addons-299185" | sudo tee /etc/hostname
	I0729 10:24:16.727679 2910803 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-299185
	
	I0729 10:24:16.727767 2910803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-299185
	I0729 10:24:16.744455 2910803 main.go:141] libmachine: Using SSH client type: native
	I0729 10:24:16.744718 2910803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 36469 <nil> <nil>}
	I0729 10:24:16.744740 2910803 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-299185' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-299185/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-299185' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 10:24:16.875712 2910803 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 10:24:16.875739 2910803 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19337-2904404/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-2904404/.minikube}
	I0729 10:24:16.875772 2910803 ubuntu.go:177] setting up certificates
	I0729 10:24:16.875804 2910803 provision.go:84] configureAuth start
	I0729 10:24:16.875871 2910803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-299185
	I0729 10:24:16.892694 2910803 provision.go:143] copyHostCerts
	I0729 10:24:16.892784 2910803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-2904404/.minikube/ca.pem (1078 bytes)
	I0729 10:24:16.892909 2910803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-2904404/.minikube/cert.pem (1123 bytes)
	I0729 10:24:16.892972 2910803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-2904404/.minikube/key.pem (1675 bytes)
	I0729 10:24:16.893024 2910803 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-2904404/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-2904404/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-2904404/.minikube/certs/ca-key.pem org=jenkins.addons-299185 san=[127.0.0.1 192.168.49.2 addons-299185 localhost minikube]
	I0729 10:24:17.165839 2910803 provision.go:177] copyRemoteCerts
	I0729 10:24:17.165917 2910803 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 10:24:17.165958 2910803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-299185
	I0729 10:24:17.183045 2910803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36469 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/addons-299185/id_rsa Username:docker}
	I0729 10:24:17.284265 2910803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 10:24:17.307982 2910803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0729 10:24:17.330710 2910803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 10:24:17.353839 2910803 provision.go:87] duration metric: took 478.017332ms to configureAuth
	I0729 10:24:17.353866 2910803 ubuntu.go:193] setting minikube options for container-runtime
	I0729 10:24:17.354071 2910803 config.go:182] Loaded profile config "addons-299185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0729 10:24:17.354084 2910803 machine.go:97] duration metric: took 933.57886ms to provisionDockerMachine
	I0729 10:24:17.354091 2910803 client.go:171] duration metric: took 9.754950894s to LocalClient.Create
	I0729 10:24:17.354111 2910803 start.go:167] duration metric: took 9.755015657s to libmachine.API.Create "addons-299185"
	I0729 10:24:17.354124 2910803 start.go:293] postStartSetup for "addons-299185" (driver="docker")
	I0729 10:24:17.354134 2910803 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 10:24:17.354188 2910803 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 10:24:17.354243 2910803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-299185
	I0729 10:24:17.370239 2910803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36469 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/addons-299185/id_rsa Username:docker}
	I0729 10:24:17.464840 2910803 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 10:24:17.467775 2910803 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0729 10:24:17.467822 2910803 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0729 10:24:17.467843 2910803 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0729 10:24:17.467850 2910803 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0729 10:24:17.467861 2910803 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-2904404/.minikube/addons for local assets ...
	I0729 10:24:17.467927 2910803 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-2904404/.minikube/files for local assets ...
	I0729 10:24:17.467958 2910803 start.go:296] duration metric: took 113.827783ms for postStartSetup
	I0729 10:24:17.468275 2910803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-299185
	I0729 10:24:17.484326 2910803 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/config.json ...
	I0729 10:24:17.484595 2910803 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:24:17.484647 2910803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-299185
	I0729 10:24:17.500371 2910803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36469 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/addons-299185/id_rsa Username:docker}
	I0729 10:24:17.592655 2910803 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 10:24:17.596985 2910803 start.go:128] duration metric: took 10.000119564s to createHost
	I0729 10:24:17.597012 2910803 start.go:83] releasing machines lock for "addons-299185", held for 10.000274664s
	I0729 10:24:17.597099 2910803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-299185
	I0729 10:24:17.614887 2910803 ssh_runner.go:195] Run: cat /version.json
	I0729 10:24:17.614944 2910803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-299185
	I0729 10:24:17.615200 2910803 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 10:24:17.615260 2910803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-299185
	I0729 10:24:17.635281 2910803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36469 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/addons-299185/id_rsa Username:docker}
	I0729 10:24:17.641691 2910803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36469 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/addons-299185/id_rsa Username:docker}
	I0729 10:24:17.728048 2910803 ssh_runner.go:195] Run: systemctl --version
	I0729 10:24:17.872389 2910803 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0729 10:24:17.876684 2910803 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0729 10:24:17.902277 2910803 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
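The one-liner above patches any loopback CNI config in place. A more readable equivalent of the same steps (a sketch, not the exact code minikube runs):

	for f in /etc/cni/net.d/*loopback.conf*; do
	  case "$f" in *.mk_disabled) continue ;; esac
	  grep -q loopback "$f" || continue
	  # add a "name" field if one is missing, then pin the CNI version to 1.0.0
	  grep -q name "$f" || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' "$f"
	  sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' "$f"
	done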
	I0729 10:24:17.902404 2910803 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 10:24:17.934543 2910803 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0729 10:24:17.934582 2910803 start.go:495] detecting cgroup driver to use...
	I0729 10:24:17.934633 2910803 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0729 10:24:17.934731 2910803 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0729 10:24:17.947864 2910803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 10:24:17.959918 2910803 docker.go:217] disabling cri-docker service (if available) ...
	I0729 10:24:17.959991 2910803 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 10:24:17.974288 2910803 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 10:24:17.989206 2910803 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 10:24:18.082103 2910803 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 10:24:18.178293 2910803 docker.go:233] disabling docker service ...
	I0729 10:24:18.178363 2910803 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 10:24:18.197661 2910803 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 10:24:18.209773 2910803 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 10:24:18.299718 2910803 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 10:24:18.389278 2910803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 10:24:18.403215 2910803 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 10:24:18.421636 2910803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0729 10:24:18.431425 2910803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 10:24:18.441377 2910803 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 10:24:18.441490 2910803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 10:24:18.451713 2910803 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 10:24:18.461620 2910803 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 10:24:18.471296 2910803 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 10:24:18.481768 2910803 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 10:24:18.491709 2910803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 10:24:18.502618 2910803 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 10:24:18.513604 2910803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
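The sed edits above pin the pause image, force the runc v2 shim, point the CNI conf_dir at /etc/cni/net.d, and select the cgroupfs driver by setting SystemdCgroup = false. A quick way to confirm the result before the restart below (sketch):

	grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	# expect: SystemdCgroup = false, sandbox_image = "registry.k8s.io/pause:3.9",
	#         conf_dir = "/etc/cni/net.d"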
	I0729 10:24:18.524670 2910803 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 10:24:18.533532 2910803 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 10:24:18.541899 2910803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:24:18.631740 2910803 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0729 10:24:18.765320 2910803 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0729 10:24:18.765473 2910803 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0729 10:24:18.769018 2910803 start.go:563] Will wait 60s for crictl version
	I0729 10:24:18.769105 2910803 ssh_runner.go:195] Run: which crictl
	I0729 10:24:18.772447 2910803 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 10:24:18.811148 2910803 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.19
	RuntimeApiVersion:  v1
	I0729 10:24:18.811274 2910803 ssh_runner.go:195] Run: containerd --version
	I0729 10:24:18.833712 2910803 ssh_runner.go:195] Run: containerd --version
	I0729 10:24:18.859331 2910803 out.go:177] * Preparing Kubernetes v1.30.3 on containerd 1.7.19 ...
	I0729 10:24:18.861358 2910803 cli_runner.go:164] Run: docker network inspect addons-299185 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 10:24:18.875744 2910803 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0729 10:24:18.879497 2910803 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 10:24:18.890276 2910803 kubeadm.go:883] updating cluster {Name:addons-299185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-299185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 10:24:18.890414 2910803 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0729 10:24:18.890481 2910803 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 10:24:18.931895 2910803 containerd.go:627] all images are preloaded for containerd runtime.
	I0729 10:24:18.931917 2910803 containerd.go:534] Images already preloaded, skipping extraction
	I0729 10:24:18.931982 2910803 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 10:24:18.967068 2910803 containerd.go:627] all images are preloaded for containerd runtime.
	I0729 10:24:18.967090 2910803 cache_images.go:84] Images are preloaded, skipping loading
	I0729 10:24:18.967099 2910803 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.30.3 containerd true true} ...
	I0729 10:24:18.967204 2910803 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-299185 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:addons-299185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
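The unit text above is installed as a systemd drop-in (10-kubeadm.conf, per the scp a few lines below); the empty ExecStart= line clears the distro default before the minikube-specific command line is set. Viewing the merged unit on the node would look roughly like this (sketch):

	systemctl cat kubelet | sed -n '/\[Service\]/,/^$/p'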
	I0729 10:24:18.967284 2910803 ssh_runner.go:195] Run: sudo crictl info
	I0729 10:24:19.005572 2910803 cni.go:84] Creating CNI manager for ""
	I0729 10:24:19.005673 2910803 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0729 10:24:19.005699 2910803 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 10:24:19.005761 2910803 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-299185 NodeName:addons-299185 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 10:24:19.005978 2910803 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-299185"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
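The rendered kubeadm config above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new below and later consumed by kubeadm init. It can also be exercised without mutating the node via kubeadm's dry-run mode (a sketch, not part of this run):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run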
	
	I0729 10:24:19.006090 2910803 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 10:24:19.015430 2910803 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 10:24:19.015531 2910803 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 10:24:19.024541 2910803 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0729 10:24:19.043235 2910803 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 10:24:19.061219 2910803 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0729 10:24:19.079389 2910803 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0729 10:24:19.082692 2910803 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 10:24:19.093225 2910803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:24:19.172026 2910803 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:24:19.187114 2910803 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185 for IP: 192.168.49.2
	I0729 10:24:19.187177 2910803 certs.go:194] generating shared ca certs ...
	I0729 10:24:19.187207 2910803 certs.go:226] acquiring lock for ca certs: {Name:mk2f7a1a044772cb2825bd46674f373ef156f656 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:24:19.187364 2910803 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-2904404/.minikube/ca.key
	I0729 10:24:19.523918 2910803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-2904404/.minikube/ca.crt ...
	I0729 10:24:19.523950 2910803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-2904404/.minikube/ca.crt: {Name:mkc12091dab0493ab84c3a8d84dbc9711be3e564 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:24:19.525517 2910803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-2904404/.minikube/ca.key ...
	I0729 10:24:19.525537 2910803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-2904404/.minikube/ca.key: {Name:mk2d6bd35708316b6e70a8dbf00c26328ed0128e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:24:19.526270 2910803 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-2904404/.minikube/proxy-client-ca.key
	I0729 10:24:20.385385 2910803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-2904404/.minikube/proxy-client-ca.crt ...
	I0729 10:24:20.385417 2910803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-2904404/.minikube/proxy-client-ca.crt: {Name:mk257d6bf64b4d9deadb263292e1657e892fbc43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:24:20.386125 2910803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-2904404/.minikube/proxy-client-ca.key ...
	I0729 10:24:20.386143 2910803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-2904404/.minikube/proxy-client-ca.key: {Name:mkf61a4c70c16a6d07af5a4400a4cef6f0c5939a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:24:20.386650 2910803 certs.go:256] generating profile certs ...
	I0729 10:24:20.386719 2910803 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.key
	I0729 10:24:20.386739 2910803 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt with IP's: []
	I0729 10:24:20.798213 2910803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt ...
	I0729 10:24:20.798247 2910803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: {Name:mka3474c0b75620d6d9745dd780cd0f07b6fb9fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:24:20.798445 2910803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.key ...
	I0729 10:24:20.798458 2910803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.key: {Name:mk9a4d178810fe0bd82babb21a408a343d116c47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:24:20.798565 2910803 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/apiserver.key.4ca8b208
	I0729 10:24:20.798589 2910803 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/apiserver.crt.4ca8b208 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0729 10:24:21.283519 2910803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/apiserver.crt.4ca8b208 ...
	I0729 10:24:21.283552 2910803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/apiserver.crt.4ca8b208: {Name:mk7ce5e124f7234e6d53fdd91f9eee7155a9a821 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:24:21.283775 2910803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/apiserver.key.4ca8b208 ...
	I0729 10:24:21.283802 2910803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/apiserver.key.4ca8b208: {Name:mke4fabc8cb7ceffffff0bbb9772f10479917295 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:24:21.283903 2910803 certs.go:381] copying /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/apiserver.crt.4ca8b208 -> /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/apiserver.crt
	I0729 10:24:21.283985 2910803 certs.go:385] copying /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/apiserver.key.4ca8b208 -> /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/apiserver.key
	I0729 10:24:21.284042 2910803 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/proxy-client.key
	I0729 10:24:21.284063 2910803 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/proxy-client.crt with IP's: []
	I0729 10:24:21.890365 2910803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/proxy-client.crt ...
	I0729 10:24:21.890424 2910803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/proxy-client.crt: {Name:mk8554bf01e7c0791e4be41e581a5b34b8c4e0b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:24:21.890689 2910803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/proxy-client.key ...
	I0729 10:24:21.890706 2910803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/proxy-client.key: {Name:mkeb8a63ef217347fbb135e1f86f98c2ae5b4a2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:24:21.891034 2910803 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 10:24:21.891090 2910803 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/ca.pem (1078 bytes)
	I0729 10:24:21.891153 2910803 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/cert.pem (1123 bytes)
	I0729 10:24:21.891231 2910803 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/key.pem (1675 bytes)
	I0729 10:24:21.892087 2910803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 10:24:21.928586 2910803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 10:24:21.962535 2910803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 10:24:21.986293 2910803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 10:24:22.029366 2910803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0729 10:24:22.056191 2910803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 10:24:22.081607 2910803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 10:24:22.106376 2910803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 10:24:22.130784 2910803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 10:24:22.158051 2910803 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 10:24:22.178864 2910803 ssh_runner.go:195] Run: openssl version
	I0729 10:24:22.184899 2910803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 10:24:22.195195 2910803 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:24:22.199343 2910803 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:24:22.199461 2910803 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 10:24:22.207002 2910803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
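The b5213941.0 link name is not arbitrary: it is the OpenSSL subject hash of minikubeCA.pem, which is exactly what the x509 -hash call above prints. Reproducing it:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints: b5213941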
	I0729 10:24:22.218773 2910803 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 10:24:22.222069 2910803 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 10:24:22.222146 2910803 kubeadm.go:392] StartCluster: {Name:addons-299185 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:addons-299185 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:24:22.222224 2910803 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0729 10:24:22.222282 2910803 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 10:24:22.259326 2910803 cri.go:89] found id: ""
	I0729 10:24:22.259444 2910803 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 10:24:22.267998 2910803 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 10:24:22.276806 2910803 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0729 10:24:22.276892 2910803 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 10:24:22.287020 2910803 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 10:24:22.287040 2910803 kubeadm.go:157] found existing configuration files:
	
	I0729 10:24:22.287112 2910803 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 10:24:22.295432 2910803 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 10:24:22.295495 2910803 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 10:24:22.303800 2910803 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 10:24:22.312439 2910803 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 10:24:22.312516 2910803 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 10:24:22.320848 2910803 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 10:24:22.329633 2910803 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 10:24:22.329728 2910803 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 10:24:22.338232 2910803 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 10:24:22.347290 2910803 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 10:24:22.347374 2910803 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0729 10:24:22.355602 2910803 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0729 10:24:22.401662 2910803 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 10:24:22.401767 2910803 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 10:24:22.441538 2910803 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0729 10:24:22.441636 2910803 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1065-aws
	I0729 10:24:22.441692 2910803 kubeadm.go:310] OS: Linux
	I0729 10:24:22.441757 2910803 kubeadm.go:310] CGROUPS_CPU: enabled
	I0729 10:24:22.441809 2910803 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0729 10:24:22.441881 2910803 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0729 10:24:22.441935 2910803 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0729 10:24:22.441996 2910803 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0729 10:24:22.442054 2910803 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0729 10:24:22.442127 2910803 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0729 10:24:22.442185 2910803 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0729 10:24:22.442241 2910803 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0729 10:24:22.514531 2910803 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 10:24:22.514697 2910803 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 10:24:22.514826 2910803 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0729 10:24:22.744134 2910803 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 10:24:22.748869 2910803 out.go:204]   - Generating certificates and keys ...
	I0729 10:24:22.748964 2910803 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 10:24:22.749039 2910803 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 10:24:23.714847 2910803 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 10:24:24.114993 2910803 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0729 10:24:24.521506 2910803 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 10:24:24.909207 2910803 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 10:24:25.145640 2910803 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 10:24:25.147180 2910803 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-299185 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0729 10:24:25.720986 2910803 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 10:24:25.721214 2910803 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-299185 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0729 10:24:26.788460 2910803 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 10:24:27.091851 2910803 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 10:24:27.408204 2910803 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 10:24:27.408285 2910803 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 10:24:27.855360 2910803 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 10:24:28.503628 2910803 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 10:24:28.641158 2910803 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 10:24:29.301462 2910803 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 10:24:30.086287 2910803 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 10:24:30.087199 2910803 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 10:24:30.092441 2910803 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 10:24:30.095389 2910803 out.go:204]   - Booting up control plane ...
	I0729 10:24:30.095504 2910803 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 10:24:30.095583 2910803 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 10:24:30.096595 2910803 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 10:24:30.109138 2910803 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 10:24:30.110324 2910803 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 10:24:30.110568 2910803 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 10:24:30.212261 2910803 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 10:24:30.212348 2910803 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 10:24:31.711706 2910803 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501443177s
	I0729 10:24:31.711814 2910803 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0729 10:24:37.213623 2910803 kubeadm.go:310] [api-check] The API server is healthy after 5.501911285s
	I0729 10:24:37.234292 2910803 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0729 10:24:37.249238 2910803 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0729 10:24:37.281861 2910803 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0729 10:24:37.282052 2910803 kubeadm.go:310] [mark-control-plane] Marking the node addons-299185 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0729 10:24:37.296958 2910803 kubeadm.go:310] [bootstrap-token] Using token: fxrdcj.i5u6dfhuo4lgexwv
	I0729 10:24:37.298915 2910803 out.go:204]   - Configuring RBAC rules ...
	I0729 10:24:37.299038 2910803 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0729 10:24:37.307035 2910803 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0729 10:24:37.317796 2910803 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0729 10:24:37.322497 2910803 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0729 10:24:37.327216 2910803 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0729 10:24:37.331970 2910803 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0729 10:24:37.621188 2910803 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0729 10:24:38.073237 2910803 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0729 10:24:38.620441 2910803 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0729 10:24:38.621743 2910803 kubeadm.go:310] 
	I0729 10:24:38.621821 2910803 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0729 10:24:38.621833 2910803 kubeadm.go:310] 
	I0729 10:24:38.621908 2910803 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0729 10:24:38.621917 2910803 kubeadm.go:310] 
	I0729 10:24:38.621941 2910803 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0729 10:24:38.622003 2910803 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0729 10:24:38.622057 2910803 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0729 10:24:38.622066 2910803 kubeadm.go:310] 
	I0729 10:24:38.622117 2910803 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0729 10:24:38.622126 2910803 kubeadm.go:310] 
	I0729 10:24:38.622172 2910803 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0729 10:24:38.622179 2910803 kubeadm.go:310] 
	I0729 10:24:38.622229 2910803 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0729 10:24:38.622304 2910803 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0729 10:24:38.622379 2910803 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0729 10:24:38.622386 2910803 kubeadm.go:310] 
	I0729 10:24:38.622467 2910803 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0729 10:24:38.622544 2910803 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0729 10:24:38.622551 2910803 kubeadm.go:310] 
	I0729 10:24:38.622631 2910803 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token fxrdcj.i5u6dfhuo4lgexwv \
	I0729 10:24:38.622733 2910803 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5fdc8b7df061730f71abfba86bc6724866b015ab75e74120b4ddda2c1c9da248 \
	I0729 10:24:38.622756 2910803 kubeadm.go:310] 	--control-plane 
	I0729 10:24:38.622764 2910803 kubeadm.go:310] 
	I0729 10:24:38.622845 2910803 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0729 10:24:38.622851 2910803 kubeadm.go:310] 
	I0729 10:24:38.622929 2910803 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token fxrdcj.i5u6dfhuo4lgexwv \
	I0729 10:24:38.623030 2910803 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:5fdc8b7df061730f71abfba86bc6724866b015ab75e74120b4ddda2c1c9da248 
	I0729 10:24:38.626496 2910803 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1065-aws\n", err: exit status 1
	I0729 10:24:38.626609 2910803 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
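The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. It can be recomputed on the control plane from the CA cert that was copied to the node earlier in this log (the standard kubeadm recipe):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl pkey -pubin -outform der \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'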
	I0729 10:24:38.626630 2910803 cni.go:84] Creating CNI manager for ""
	I0729 10:24:38.626641 2910803 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0729 10:24:38.629379 2910803 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0729 10:24:38.632039 2910803 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0729 10:24:38.635880 2910803 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0729 10:24:38.635902 2910803 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0729 10:24:38.654179 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0729 10:24:38.923882 2910803 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0729 10:24:38.923988 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:24:38.924138 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-299185 minikube.k8s.io/updated_at=2024_07_29T10_24_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638 minikube.k8s.io/name=addons-299185 minikube.k8s.io/primary=true
	I0729 10:24:39.115540 2910803 ops.go:34] apiserver oom_adj: -16
	I0729 10:24:39.115639 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:24:39.616686 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:24:40.115925 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:24:40.615948 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:24:41.116705 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:24:41.617207 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:24:42.116596 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:24:42.615843 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:24:43.116383 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:24:43.616683 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:24:44.116261 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:24:44.616353 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:24:45.116640 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:24:45.616501 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:24:46.116647 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:24:46.615844 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:24:47.115777 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:24:47.615833 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:24:48.115823 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:24:48.616275 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:24:49.115760 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:24:49.615907 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:24:50.116537 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:24:50.616156 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:24:51.115813 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:24:51.616151 2910803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0729 10:24:51.776494 2910803 kubeadm.go:1113] duration metric: took 12.852557557s to wait for elevateKubeSystemPrivileges
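The repeated "get sa default" calls above are a readiness poll: bootstrap is only considered complete once the default ServiceAccount exists in the default namespace. In shell terms the loop amounts to (a sketch; binary and kubeconfig paths from the log):

	until sudo /var/lib/minikube/binaries/v1.30.3/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done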
	I0729 10:24:51.776526 2910803 kubeadm.go:394] duration metric: took 29.554414667s to StartCluster
	I0729 10:24:51.776544 2910803 settings.go:142] acquiring lock: {Name:mk13aac0349b1bb0c6badbadf5082ad34f96b8fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:24:51.777112 2910803 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-2904404/kubeconfig
	I0729 10:24:51.777579 2910803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-2904404/kubeconfig: {Name:mkeecad1fa513e831370425fbda0ceb7b2cb39f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 10:24:51.778113 2910803 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0729 10:24:51.778142 2910803 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0729 10:24:51.778424 2910803 config.go:182] Loaded profile config "addons-299185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0729 10:24:51.778457 2910803 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0729 10:24:51.778556 2910803 addons.go:69] Setting yakd=true in profile "addons-299185"
	I0729 10:24:51.778579 2910803 addons.go:234] Setting addon yakd=true in "addons-299185"
	I0729 10:24:51.778607 2910803 host.go:66] Checking if "addons-299185" exists ...
	I0729 10:24:51.779045 2910803 cli_runner.go:164] Run: docker container inspect addons-299185 --format={{.State.Status}}
	I0729 10:24:51.779640 2910803 addons.go:69] Setting metrics-server=true in profile "addons-299185"
	I0729 10:24:51.779664 2910803 addons.go:234] Setting addon metrics-server=true in "addons-299185"
	I0729 10:24:51.779691 2910803 host.go:66] Checking if "addons-299185" exists ...
	I0729 10:24:51.780149 2910803 cli_runner.go:164] Run: docker container inspect addons-299185 --format={{.State.Status}}
	I0729 10:24:51.780362 2910803 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-299185"
	I0729 10:24:51.780458 2910803 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-299185"
	I0729 10:24:51.780524 2910803 host.go:66] Checking if "addons-299185" exists ...
	I0729 10:24:51.781074 2910803 cli_runner.go:164] Run: docker container inspect addons-299185 --format={{.State.Status}}
	I0729 10:24:51.781630 2910803 addons.go:69] Setting registry=true in profile "addons-299185"
	I0729 10:24:51.781665 2910803 addons.go:234] Setting addon registry=true in "addons-299185"
	I0729 10:24:51.781693 2910803 host.go:66] Checking if "addons-299185" exists ...
	I0729 10:24:51.782066 2910803 cli_runner.go:164] Run: docker container inspect addons-299185 --format={{.State.Status}}
	I0729 10:24:51.783282 2910803 addons.go:69] Setting cloud-spanner=true in profile "addons-299185"
	I0729 10:24:51.783321 2910803 addons.go:234] Setting addon cloud-spanner=true in "addons-299185"
	I0729 10:24:51.783352 2910803 host.go:66] Checking if "addons-299185" exists ...
	I0729 10:24:51.783750 2910803 cli_runner.go:164] Run: docker container inspect addons-299185 --format={{.State.Status}}
	I0729 10:24:51.798854 2910803 addons.go:69] Setting storage-provisioner=true in profile "addons-299185"
	I0729 10:24:51.798946 2910803 addons.go:234] Setting addon storage-provisioner=true in "addons-299185"
	I0729 10:24:51.799014 2910803 host.go:66] Checking if "addons-299185" exists ...
	I0729 10:24:51.799421 2910803 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-299185"
	I0729 10:24:51.799482 2910803 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-299185"
	I0729 10:24:51.799497 2910803 cli_runner.go:164] Run: docker container inspect addons-299185 --format={{.State.Status}}
	I0729 10:24:51.799510 2910803 host.go:66] Checking if "addons-299185" exists ...
	I0729 10:24:51.799913 2910803 cli_runner.go:164] Run: docker container inspect addons-299185 --format={{.State.Status}}
	I0729 10:24:51.823872 2910803 addons.go:69] Setting default-storageclass=true in profile "addons-299185"
	I0729 10:24:51.823928 2910803 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-299185"
	I0729 10:24:51.824242 2910803 cli_runner.go:164] Run: docker container inspect addons-299185 --format={{.State.Status}}
	I0729 10:24:51.824394 2910803 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-299185"
	I0729 10:24:51.824424 2910803 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-299185"
	I0729 10:24:51.824659 2910803 cli_runner.go:164] Run: docker container inspect addons-299185 --format={{.State.Status}}
	I0729 10:24:51.857853 2910803 addons.go:69] Setting volcano=true in profile "addons-299185"
	I0729 10:24:51.858116 2910803 addons.go:234] Setting addon volcano=true in "addons-299185"
	I0729 10:24:51.858217 2910803 out.go:177] * Verifying Kubernetes components...
	I0729 10:24:51.858287 2910803 host.go:66] Checking if "addons-299185" exists ...
	I0729 10:24:51.859516 2910803 cli_runner.go:164] Run: docker container inspect addons-299185 --format={{.State.Status}}
	I0729 10:24:51.886049 2910803 addons.go:69] Setting volumesnapshots=true in profile "addons-299185"
	I0729 10:24:51.886150 2910803 addons.go:234] Setting addon volumesnapshots=true in "addons-299185"
	I0729 10:24:51.886215 2910803 host.go:66] Checking if "addons-299185" exists ...
	I0729 10:24:51.888162 2910803 cli_runner.go:164] Run: docker container inspect addons-299185 --format={{.State.Status}}
	I0729 10:24:51.924298 2910803 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.0
	I0729 10:24:51.928937 2910803 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0729 10:24:51.857895 2910803 addons.go:69] Setting gcp-auth=true in profile "addons-299185"
	I0729 10:24:51.936962 2910803 mustload.go:65] Loading cluster: addons-299185
	I0729 10:24:51.937219 2910803 config.go:182] Loaded profile config "addons-299185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0729 10:24:51.937625 2910803 cli_runner.go:164] Run: docker container inspect addons-299185 --format={{.State.Status}}
	I0729 10:24:51.857903 2910803 addons.go:69] Setting ingress=true in profile "addons-299185"
	I0729 10:24:51.953257 2910803 addons.go:234] Setting addon ingress=true in "addons-299185"
	I0729 10:24:51.953334 2910803 host.go:66] Checking if "addons-299185" exists ...
	I0729 10:24:51.953800 2910803 cli_runner.go:164] Run: docker container inspect addons-299185 --format={{.State.Status}}
	I0729 10:24:51.857909 2910803 addons.go:69] Setting ingress-dns=true in profile "addons-299185"
	I0729 10:24:51.959291 2910803 addons.go:234] Setting addon ingress-dns=true in "addons-299185"
	I0729 10:24:51.959334 2910803 host.go:66] Checking if "addons-299185" exists ...
	I0729 10:24:51.959746 2910803 cli_runner.go:164] Run: docker container inspect addons-299185 --format={{.State.Status}}
	I0729 10:24:51.967859 2910803 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0729 10:24:51.857913 2910803 addons.go:69] Setting inspektor-gadget=true in profile "addons-299185"
	I0729 10:24:51.973211 2910803 addons.go:234] Setting addon inspektor-gadget=true in "addons-299185"
	I0729 10:24:51.973264 2910803 host.go:66] Checking if "addons-299185" exists ...
	I0729 10:24:51.973846 2910803 cli_runner.go:164] Run: docker container inspect addons-299185 --format={{.State.Status}}
	I0729 10:24:51.976166 2910803 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0729 10:24:51.976190 2910803 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0729 10:24:51.976265 2910803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-299185
	I0729 10:24:51.985925 2910803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 10:24:51.989087 2910803 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 10:24:51.989109 2910803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0729 10:24:51.989175 2910803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-299185
	I0729 10:24:52.016545 2910803 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0729 10:24:52.019615 2910803 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 10:24:52.019665 2910803 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 10:24:52.019823 2910803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-299185
	I0729 10:24:52.020138 2910803 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0729 10:24:52.022796 2910803 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0729 10:24:52.022866 2910803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0729 10:24:52.023029 2910803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-299185
	I0729 10:24:52.029850 2910803 out.go:177]   - Using image docker.io/registry:2.8.3
	I0729 10:24:52.037768 2910803 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0729 10:24:52.037793 2910803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0729 10:24:52.037865 2910803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-299185
	I0729 10:24:52.054756 2910803 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 10:24:52.054967 2910803 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0729 10:24:52.056791 2910803 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 10:24:52.056818 2910803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 10:24:52.056890 2910803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-299185
	I0729 10:24:52.058878 2910803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0729 10:24:52.061194 2910803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0729 10:24:52.063919 2910803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0729 10:24:52.069077 2910803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0729 10:24:52.082104 2910803 addons.go:234] Setting addon default-storageclass=true in "addons-299185"
	I0729 10:24:52.082151 2910803 host.go:66] Checking if "addons-299185" exists ...
	I0729 10:24:52.082575 2910803 cli_runner.go:164] Run: docker container inspect addons-299185 --format={{.State.Status}}
	I0729 10:24:52.083856 2910803 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0729 10:24:52.088727 2910803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0729 10:24:52.091271 2910803 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0729 10:24:52.096698 2910803 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0729 10:24:52.099004 2910803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0729 10:24:52.100215 2910803 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-299185"
	I0729 10:24:52.100305 2910803 host.go:66] Checking if "addons-299185" exists ...
	I0729 10:24:52.100861 2910803 cli_runner.go:164] Run: docker container inspect addons-299185 --format={{.State.Status}}
	I0729 10:24:52.110440 2910803 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0729 10:24:52.113310 2910803 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0729 10:24:52.113396 2910803 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0729 10:24:52.113501 2910803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-299185
	I0729 10:24:52.130891 2910803 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0729 10:24:52.131418 2910803 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0729 10:24:52.131468 2910803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0729 10:24:52.131571 2910803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-299185
	I0729 10:24:52.138569 2910803 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 10:24:52.138594 2910803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0729 10:24:52.138659 2910803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-299185
	I0729 10:24:52.150534 2910803 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0729 10:24:52.153511 2910803 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0729 10:24:52.153535 2910803 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0729 10:24:52.153602 2910803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-299185
	I0729 10:24:52.164989 2910803 host.go:66] Checking if "addons-299185" exists ...
	I0729 10:24:52.170659 2910803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.1
	I0729 10:24:52.173496 2910803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 10:24:52.177281 2910803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 10:24:52.184274 2910803 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 10:24:52.184344 2910803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0729 10:24:52.184444 2910803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-299185
	I0729 10:24:52.211876 2910803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36469 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/addons-299185/id_rsa Username:docker}
	I0729 10:24:52.212245 2910803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36469 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/addons-299185/id_rsa Username:docker}
	I0729 10:24:52.214829 2910803 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0
	I0729 10:24:52.219852 2910803 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0729 10:24:52.219882 2910803 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0729 10:24:52.219963 2910803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-299185
	I0729 10:24:52.292082 2910803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36469 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/addons-299185/id_rsa Username:docker}
	I0729 10:24:52.297473 2910803 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0729 10:24:52.322930 2910803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36469 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/addons-299185/id_rsa Username:docker}
	I0729 10:24:52.354908 2910803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36469 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/addons-299185/id_rsa Username:docker}
	I0729 10:24:52.359541 2910803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36469 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/addons-299185/id_rsa Username:docker}
	I0729 10:24:52.363359 2910803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36469 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/addons-299185/id_rsa Username:docker}
	I0729 10:24:52.369313 2910803 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 10:24:52.369335 2910803 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 10:24:52.369405 2910803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-299185
	I0729 10:24:52.398546 2910803 out.go:177]   - Using image docker.io/busybox:stable
	I0729 10:24:52.399509 2910803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36469 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/addons-299185/id_rsa Username:docker}
	I0729 10:24:52.402002 2910803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36469 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/addons-299185/id_rsa Username:docker}
	I0729 10:24:52.410952 2910803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36469 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/addons-299185/id_rsa Username:docker}
	I0729 10:24:52.411062 2910803 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0729 10:24:52.419980 2910803 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 10:24:52.420012 2910803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0729 10:24:52.420080 2910803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-299185
	I0729 10:24:52.421938 2910803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36469 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/addons-299185/id_rsa Username:docker}
	I0729 10:24:52.436051 2910803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36469 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/addons-299185/id_rsa Username:docker}
	I0729 10:24:52.458422 2910803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36469 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/addons-299185/id_rsa Username:docker}
	I0729 10:24:52.481192 2910803 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 10:24:52.483898 2910803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36469 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/addons-299185/id_rsa Username:docker}
	I0729 10:24:52.908355 2910803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0729 10:24:52.947644 2910803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0729 10:24:52.953938 2910803 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 10:24:52.953964 2910803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0729 10:24:53.103763 2910803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 10:24:53.135476 2910803 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0729 10:24:53.135551 2910803 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0729 10:24:53.203909 2910803 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0729 10:24:53.203984 2910803 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0729 10:24:53.216962 2910803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0729 10:24:53.256509 2910803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0729 10:24:53.266670 2910803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0729 10:24:53.349275 2910803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0729 10:24:53.372710 2910803 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0729 10:24:53.372737 2910803 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0729 10:24:53.399545 2910803 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0729 10:24:53.399587 2910803 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0729 10:24:53.471671 2910803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 10:24:53.490079 2910803 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0729 10:24:53.490107 2910803 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0729 10:24:53.517827 2910803 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 10:24:53.517865 2910803 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 10:24:53.531277 2910803 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0729 10:24:53.531316 2910803 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0729 10:24:53.533546 2910803 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0729 10:24:53.533569 2910803 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0729 10:24:53.854470 2910803 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0729 10:24:53.854512 2910803 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0729 10:24:53.896205 2910803 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0729 10:24:53.896235 2910803 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0729 10:24:53.898831 2910803 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 10:24:53.898857 2910803 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 10:24:53.901146 2910803 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0729 10:24:53.901170 2910803 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0729 10:24:53.930509 2910803 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0729 10:24:53.930538 2910803 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0729 10:24:53.983858 2910803 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0729 10:24:53.983880 2910803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0729 10:24:54.100519 2910803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 10:24:54.137390 2910803 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0729 10:24:54.137421 2910803 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0729 10:24:54.199546 2910803 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0729 10:24:54.199575 2910803 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0729 10:24:54.305488 2910803 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0729 10:24:54.305528 2910803 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0729 10:24:54.307552 2910803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0729 10:24:54.330923 2910803 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0729 10:24:54.330965 2910803 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0729 10:24:54.404753 2910803 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0729 10:24:54.404779 2910803 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0729 10:24:54.438923 2910803 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0729 10:24:54.438958 2910803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0729 10:24:54.463511 2910803 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0729 10:24:54.463555 2910803 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0729 10:24:54.583650 2910803 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0729 10:24:54.583692 2910803 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0729 10:24:54.619168 2910803 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0729 10:24:54.619210 2910803 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0729 10:24:54.638868 2910803 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 10:24:54.638901 2910803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0729 10:24:54.665923 2910803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0729 10:24:54.693627 2910803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 10:24:54.752331 2910803 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 10:24:54.752359 2910803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0729 10:24:54.791324 2910803 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0729 10:24:54.791354 2910803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0729 10:24:55.019173 2910803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0729 10:24:55.061805 2910803 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.580576798s)
	I0729 10:24:55.061878 2910803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.153451501s)
	I0729 10:24:55.062062 2910803 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.764563808s)
	I0729 10:24:55.062109 2910803 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
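
The sed pipeline completed above edits the Corefile stored in the coredns ConfigMap: it inserts a hosts block ahead of the forward directive so that host.minikube.internal resolves to the host gateway (192.168.49.1), and adds a log directive before errors. A minimal sketch of the injected stanza, reconstructed from the sed expressions in the logged command (the enclosing server block and its other plugins are assumed, since the log does not show them):

	.:53 {
	    log
	    errors
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	}
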
	I0729 10:24:55.063661 2910803 node_ready.go:35] waiting up to 6m0s for node "addons-299185" to be "Ready" ...
	I0729 10:24:55.067966 2910803 node_ready.go:49] node "addons-299185" has status "Ready":"True"
	I0729 10:24:55.068043 2910803 node_ready.go:38] duration metric: took 4.032517ms for node "addons-299185" to be "Ready" ...
	I0729 10:24:55.068068 2910803 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 10:24:55.086527 2910803 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-prm8j" in "kube-system" namespace to be "Ready" ...
	I0729 10:24:55.201804 2910803 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0729 10:24:55.201827 2910803 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0729 10:24:55.241052 2910803 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0729 10:24:55.241072 2910803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0729 10:24:55.266184 2910803 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0729 10:24:55.266258 2910803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0729 10:24:55.295279 2910803 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 10:24:55.295353 2910803 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0729 10:24:55.422212 2910803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.474512117s)
	I0729 10:24:55.422314 2910803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.318420873s)
	I0729 10:24:55.583029 2910803 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-299185" context rescaled to 1 replicas
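
The "rescaled to 1 replicas" line reflects minikube shrinking the coredns Deployment to a single replica on this one-node cluster (two coredns pods appear earlier in this log). A rough manual equivalent, sketched with the names taken from the log line, would be:

	kubectl --context addons-299185 -n kube-system scale deployment coredns --replicas=1
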
	I0729 10:24:55.746253 2910803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0729 10:24:56.147603 2910803 pod_ready.go:97] pod "coredns-7db6d8ff4d-prm8j" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 10:24:51 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 10:24:51 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 10:24:51 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 10:24:51 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 10:24:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP: PodIPs:[] StartTime:2024-07-29 10:24:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:ContainerStatusUnknown,Message:The container could not be located when the pod was terminated,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID: ContainerID: Started:0x400174d18a AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0729 10:24:56.147696 2910803 pod_ready.go:81] duration metric: took 1.061078203s for pod "coredns-7db6d8ff4d-prm8j" in "kube-system" namespace to be "Ready" ...
	E0729 10:24:56.147722 2910803 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-prm8j" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 10:24:51 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 10:24:51 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 10:24:51 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 10:24:51 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-07-29 10:24:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP: PodIPs:[] StartTime:2024-07-29 10:24:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:ContainerStatusUnknown,Message:The container could not be located when the pod was terminated,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID: ContainerID: Started:0x400174d18a AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0729 10:24:56.147765 2910803 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-zpw4j" in "kube-system" namespace to be "Ready" ...
	I0729 10:24:58.158681 2910803 pod_ready.go:102] pod "coredns-7db6d8ff4d-zpw4j" in "kube-system" namespace has status "Ready":"False"
	I0729 10:24:59.398193 2910803 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0729 10:24:59.398376 2910803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-299185
	I0729 10:24:59.439675 2910803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36469 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/addons-299185/id_rsa Username:docker}
	I0729 10:24:59.736708 2910803 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0729 10:24:59.869257 2910803 addons.go:234] Setting addon gcp-auth=true in "addons-299185"
	I0729 10:24:59.869365 2910803 host.go:66] Checking if "addons-299185" exists ...
	I0729 10:24:59.869899 2910803 cli_runner.go:164] Run: docker container inspect addons-299185 --format={{.State.Status}}
	I0729 10:24:59.894795 2910803 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0729 10:24:59.894847 2910803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-299185
	I0729 10:24:59.932926 2910803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36469 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/addons-299185/id_rsa Username:docker}
	I0729 10:25:00.210170 2910803 pod_ready.go:102] pod "coredns-7db6d8ff4d-zpw4j" in "kube-system" namespace has status "Ready":"False"
	I0729 10:25:02.162942 2910803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.906385617s)
	I0729 10:25:02.163154 2910803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.946164386s)
	I0729 10:25:02.163199 2910803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.896456106s)
	I0729 10:25:02.163244 2910803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.691543068s)
	I0729 10:25:02.163247 2910803 addons.go:475] Verifying addon ingress=true in "addons-299185"
	I0729 10:25:02.163381 2910803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.06279874s)
	I0729 10:25:02.163415 2910803 addons.go:475] Verifying addon metrics-server=true in "addons-299185"
	I0729 10:25:02.163495 2910803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.855918555s)
	I0729 10:25:02.163525 2910803 addons.go:475] Verifying addon registry=true in "addons-299185"
	I0729 10:25:02.163205 2910803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.813910231s)
	I0729 10:25:02.163660 2910803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.497707581s)
	I0729 10:25:02.163919 2910803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.470256647s)
	W0729 10:25:02.163954 2910803 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0729 10:25:02.163979 2910803 retry.go:31] will retry after 155.946024ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
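
The error above is an ordering race inside a single kubectl apply: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is submitted in the same invocation that creates the snapshot.storage.k8s.io CRDs, so the API server has not yet registered the new kind, hence "ensure CRDs are installed first". The retry below (with --force) succeeds on the next attempt once the CRDs are established. When applying such a bundle by hand, one common way to sidestep the race is to apply the CRDs first and wait for them, sketched here with file names reused from the log (the wait step is an illustration, not what minikube itself does):

	kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f csi-hostpath-snapshotclass.yaml
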
	I0729 10:25:02.164053 2910803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.144847379s)
	I0729 10:25:02.165642 2910803 out.go:177] * Verifying ingress addon...
	I0729 10:25:02.169043 2910803 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-299185 service yakd-dashboard -n yakd-dashboard
	
	I0729 10:25:02.169138 2910803 out.go:177] * Verifying registry addon...
	I0729 10:25:02.172141 2910803 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0729 10:25:02.173782 2910803 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0729 10:25:02.188817 2910803 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0729 10:25:02.188849 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:02.197214 2910803 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0729 10:25:02.197238 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
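
The kapi.go "waiting for pod ... current state: Pending" lines that dominate the remainder of this log are minikube's readiness poll over label selectors. A rough hand-run equivalent for the registry check, sketched with the selector and namespace from the log (the timeout value is an assumption):

	kubectl --context addons-299185 -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=registry \
	  --for=condition=Ready --timeout=6m
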
	I0729 10:25:02.320428 2910803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0729 10:25:02.691082 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:02.692577 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:02.698001 2910803 pod_ready.go:102] pod "coredns-7db6d8ff4d-zpw4j" in "kube-system" namespace has status "Ready":"False"
	I0729 10:25:03.028245 2910803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.28189387s)
	I0729 10:25:03.028283 2910803 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-299185"
	I0729 10:25:03.028471 2910803 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.133653899s)
	I0729 10:25:03.031615 2910803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0729 10:25:03.031697 2910803 out.go:177] * Verifying csi-hostpath-driver addon...
	I0729 10:25:03.038843 2910803 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0729 10:25:03.039812 2910803 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0729 10:25:03.041547 2910803 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0729 10:25:03.041577 2910803 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0729 10:25:03.046311 2910803 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0729 10:25:03.046344 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:03.131426 2910803 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0729 10:25:03.131456 2910803 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0729 10:25:03.171119 2910803 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 10:25:03.171207 2910803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0729 10:25:03.188534 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:03.198840 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:03.205137 2910803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0729 10:25:03.546002 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:03.676582 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:03.679751 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:03.944653 2910803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.624174806s)
	I0729 10:25:04.047129 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:04.182475 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:04.184090 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:04.276942 2910803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.071722134s)
	I0729 10:25:04.280639 2910803 addons.go:475] Verifying addon gcp-auth=true in "addons-299185"
	I0729 10:25:04.284814 2910803 out.go:177] * Verifying gcp-auth addon...
	I0729 10:25:04.287817 2910803 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0729 10:25:04.290586 2910803 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0729 10:25:04.545550 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:04.676840 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:04.680368 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:05.046604 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:05.157015 2910803 pod_ready.go:102] pod "coredns-7db6d8ff4d-zpw4j" in "kube-system" namespace has status "Ready":"False"
	I0729 10:25:05.180049 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:05.181320 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:05.545938 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:05.681744 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:05.681939 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:06.046550 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:06.180515 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:06.182355 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:06.545964 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:06.679654 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:06.680230 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:07.046182 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:07.177812 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:07.181413 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:07.546498 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:07.655124 2910803 pod_ready.go:102] pod "coredns-7db6d8ff4d-zpw4j" in "kube-system" namespace has status "Ready":"False"
	I0729 10:25:07.680738 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:07.682050 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:08.045812 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:08.183141 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:08.184138 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:08.547068 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:08.656025 2910803 pod_ready.go:92] pod "coredns-7db6d8ff4d-zpw4j" in "kube-system" namespace has status "Ready":"True"
	I0729 10:25:08.656052 2910803 pod_ready.go:81] duration metric: took 12.508245282s for pod "coredns-7db6d8ff4d-zpw4j" in "kube-system" namespace to be "Ready" ...
	I0729 10:25:08.656064 2910803 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-299185" in "kube-system" namespace to be "Ready" ...
	I0729 10:25:08.661667 2910803 pod_ready.go:92] pod "etcd-addons-299185" in "kube-system" namespace has status "Ready":"True"
	I0729 10:25:08.661692 2910803 pod_ready.go:81] duration metric: took 5.595464ms for pod "etcd-addons-299185" in "kube-system" namespace to be "Ready" ...
	I0729 10:25:08.661732 2910803 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-299185" in "kube-system" namespace to be "Ready" ...
	I0729 10:25:08.667600 2910803 pod_ready.go:92] pod "kube-apiserver-addons-299185" in "kube-system" namespace has status "Ready":"True"
	I0729 10:25:08.667627 2910803 pod_ready.go:81] duration metric: took 5.878844ms for pod "kube-apiserver-addons-299185" in "kube-system" namespace to be "Ready" ...
	I0729 10:25:08.667639 2910803 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-299185" in "kube-system" namespace to be "Ready" ...
	I0729 10:25:08.679026 2910803 pod_ready.go:92] pod "kube-controller-manager-addons-299185" in "kube-system" namespace has status "Ready":"True"
	I0729 10:25:08.679053 2910803 pod_ready.go:81] duration metric: took 11.405509ms for pod "kube-controller-manager-addons-299185" in "kube-system" namespace to be "Ready" ...
	I0729 10:25:08.679065 2910803 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ffxlq" in "kube-system" namespace to be "Ready" ...
	I0729 10:25:08.681076 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:08.681766 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:08.687300 2910803 pod_ready.go:92] pod "kube-proxy-ffxlq" in "kube-system" namespace has status "Ready":"True"
	I0729 10:25:08.687322 2910803 pod_ready.go:81] duration metric: took 8.223737ms for pod "kube-proxy-ffxlq" in "kube-system" namespace to be "Ready" ...
	I0729 10:25:08.687334 2910803 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-299185" in "kube-system" namespace to be "Ready" ...
	I0729 10:25:09.046918 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:09.052119 2910803 pod_ready.go:92] pod "kube-scheduler-addons-299185" in "kube-system" namespace has status "Ready":"True"
	I0729 10:25:09.052185 2910803 pod_ready.go:81] duration metric: took 364.821328ms for pod "kube-scheduler-addons-299185" in "kube-system" namespace to be "Ready" ...
	I0729 10:25:09.052214 2910803 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-djkh5" in "kube-system" namespace to be "Ready" ...
	I0729 10:25:09.177107 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:09.181659 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:09.561860 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:09.676580 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:09.679419 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:10.047480 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:10.176762 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:10.179907 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:10.546335 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:10.678144 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:10.678856 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:11.054694 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:11.062250 2910803 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-djkh5" in "kube-system" namespace has status "Ready":"False"
	I0729 10:25:11.184256 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:11.184602 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:11.546733 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:11.679668 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:11.682669 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:12.046835 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:12.180603 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:12.182113 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:12.545210 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:12.676943 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:12.680321 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:13.046748 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:13.178366 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:13.188996 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:13.545705 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:13.558427 2910803 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-djkh5" in "kube-system" namespace has status "Ready":"False"
	I0729 10:25:13.677572 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:13.680541 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:14.045835 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:14.176996 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:14.181301 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:14.546250 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:14.677914 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:14.682471 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:15.047650 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:15.191875 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:15.192955 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:15.546424 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:15.567042 2910803 pod_ready.go:102] pod "nvidia-device-plugin-daemonset-djkh5" in "kube-system" namespace has status "Ready":"False"
	I0729 10:25:15.676210 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:15.679251 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:16.046941 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:16.203054 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:16.204111 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:16.546478 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:16.558679 2910803 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-djkh5" in "kube-system" namespace has status "Ready":"True"
	I0729 10:25:16.558706 2910803 pod_ready.go:81] duration metric: took 7.506464606s for pod "nvidia-device-plugin-daemonset-djkh5" in "kube-system" namespace to be "Ready" ...
	I0729 10:25:16.558715 2910803 pod_ready.go:38] duration metric: took 21.490620879s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 10:25:16.558730 2910803 api_server.go:52] waiting for apiserver process to appear ...
	I0729 10:25:16.558791 2910803 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:25:16.579698 2910803 api_server.go:72] duration metric: took 24.80152137s to wait for apiserver process to appear ...
	I0729 10:25:16.579724 2910803 api_server.go:88] waiting for apiserver healthz status ...
	I0729 10:25:16.579743 2910803 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0729 10:25:16.588321 2910803 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0729 10:25:16.589427 2910803 api_server.go:141] control plane version: v1.30.3
	I0729 10:25:16.589455 2910803 api_server.go:131] duration metric: took 9.723193ms to wait for apiserver health ...
	I0729 10:25:16.589484 2910803 system_pods.go:43] waiting for kube-system pods to appear ...
	I0729 10:25:16.601101 2910803 system_pods.go:59] 18 kube-system pods found
	I0729 10:25:16.601142 2910803 system_pods.go:61] "coredns-7db6d8ff4d-zpw4j" [dedf91b1-0a90-40f3-a6aa-166b1a4d6288] Running
	I0729 10:25:16.601152 2910803 system_pods.go:61] "csi-hostpath-attacher-0" [d0979073-a90c-4b88-b16f-9a7d021cc8b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0729 10:25:16.601178 2910803 system_pods.go:61] "csi-hostpath-resizer-0" [b0a322a4-f3d1-49a6-ba0b-5c124dc23072] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0729 10:25:16.601198 2910803 system_pods.go:61] "csi-hostpathplugin-qbrtc" [d3742046-72d9-4bbe-b152-d539af45eb7b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0729 10:25:16.601209 2910803 system_pods.go:61] "etcd-addons-299185" [45dc93a7-46cc-499c-a08d-47fa7b1782c7] Running
	I0729 10:25:16.601214 2910803 system_pods.go:61] "kindnet-f6x9v" [1887d585-0f9c-4b0f-a272-b71f3ac1e244] Running
	I0729 10:25:16.601218 2910803 system_pods.go:61] "kube-apiserver-addons-299185" [a204ed0a-bf54-4fe7-b898-08fdd080c312] Running
	I0729 10:25:16.601228 2910803 system_pods.go:61] "kube-controller-manager-addons-299185" [5fd151c4-8842-4dca-bc6d-58ec30326c89] Running
	I0729 10:25:16.601233 2910803 system_pods.go:61] "kube-ingress-dns-minikube" [e1e43055-7108-422a-b462-7af60c31b890] Running
	I0729 10:25:16.601237 2910803 system_pods.go:61] "kube-proxy-ffxlq" [e64b5e6a-37f1-4cbf-9378-52f4020ac12f] Running
	I0729 10:25:16.601241 2910803 system_pods.go:61] "kube-scheduler-addons-299185" [ac0c8a36-45e7-485a-bd89-a38514d4031b] Running
	I0729 10:25:16.601261 2910803 system_pods.go:61] "metrics-server-c59844bb4-wtmps" [e34b40fc-8809-456b-9af1-ceb94b883425] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 10:25:16.601272 2910803 system_pods.go:61] "nvidia-device-plugin-daemonset-djkh5" [8b50190f-ddbf-4864-928b-7b96c73d1e81] Running
	I0729 10:25:16.601278 2910803 system_pods.go:61] "registry-656c9c8d9c-4z48h" [cab9244c-6d04-49d8-a796-1f4e4c1c4a12] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0729 10:25:16.601300 2910803 system_pods.go:61] "registry-proxy-72wcs" [54696946-5a63-4251-be62-cb68e1b927df] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0729 10:25:16.601309 2910803 system_pods.go:61] "snapshot-controller-745499f584-h9r9l" [31bf5461-6376-459f-ae4c-406b8d4ef12b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 10:25:16.601321 2910803 system_pods.go:61] "snapshot-controller-745499f584-l57rs" [6922faa4-24f0-454c-a8cd-e7b6919d8fe9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 10:25:16.601325 2910803 system_pods.go:61] "storage-provisioner" [174d17f1-4536-426d-b9f3-0d0e97aaa966] Running
	I0729 10:25:16.601333 2910803 system_pods.go:74] duration metric: took 11.835809ms to wait for pod list to return data ...
	I0729 10:25:16.601345 2910803 default_sa.go:34] waiting for default service account to be created ...
	I0729 10:25:16.603895 2910803 default_sa.go:45] found service account: "default"
	I0729 10:25:16.603922 2910803 default_sa.go:55] duration metric: took 2.569541ms for default service account to be created ...
	I0729 10:25:16.603932 2910803 system_pods.go:116] waiting for k8s-apps to be running ...
	I0729 10:25:16.614938 2910803 system_pods.go:86] 18 kube-system pods found
	I0729 10:25:16.614975 2910803 system_pods.go:89] "coredns-7db6d8ff4d-zpw4j" [dedf91b1-0a90-40f3-a6aa-166b1a4d6288] Running
	I0729 10:25:16.614986 2910803 system_pods.go:89] "csi-hostpath-attacher-0" [d0979073-a90c-4b88-b16f-9a7d021cc8b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0729 10:25:16.614993 2910803 system_pods.go:89] "csi-hostpath-resizer-0" [b0a322a4-f3d1-49a6-ba0b-5c124dc23072] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0729 10:25:16.615001 2910803 system_pods.go:89] "csi-hostpathplugin-qbrtc" [d3742046-72d9-4bbe-b152-d539af45eb7b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0729 10:25:16.615006 2910803 system_pods.go:89] "etcd-addons-299185" [45dc93a7-46cc-499c-a08d-47fa7b1782c7] Running
	I0729 10:25:16.615012 2910803 system_pods.go:89] "kindnet-f6x9v" [1887d585-0f9c-4b0f-a272-b71f3ac1e244] Running
	I0729 10:25:16.615017 2910803 system_pods.go:89] "kube-apiserver-addons-299185" [a204ed0a-bf54-4fe7-b898-08fdd080c312] Running
	I0729 10:25:16.615022 2910803 system_pods.go:89] "kube-controller-manager-addons-299185" [5fd151c4-8842-4dca-bc6d-58ec30326c89] Running
	I0729 10:25:16.615027 2910803 system_pods.go:89] "kube-ingress-dns-minikube" [e1e43055-7108-422a-b462-7af60c31b890] Running
	I0729 10:25:16.615031 2910803 system_pods.go:89] "kube-proxy-ffxlq" [e64b5e6a-37f1-4cbf-9378-52f4020ac12f] Running
	I0729 10:25:16.615035 2910803 system_pods.go:89] "kube-scheduler-addons-299185" [ac0c8a36-45e7-485a-bd89-a38514d4031b] Running
	I0729 10:25:16.615041 2910803 system_pods.go:89] "metrics-server-c59844bb4-wtmps" [e34b40fc-8809-456b-9af1-ceb94b883425] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0729 10:25:16.615045 2910803 system_pods.go:89] "nvidia-device-plugin-daemonset-djkh5" [8b50190f-ddbf-4864-928b-7b96c73d1e81] Running
	I0729 10:25:16.615051 2910803 system_pods.go:89] "registry-656c9c8d9c-4z48h" [cab9244c-6d04-49d8-a796-1f4e4c1c4a12] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0729 10:25:16.615057 2910803 system_pods.go:89] "registry-proxy-72wcs" [54696946-5a63-4251-be62-cb68e1b927df] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0729 10:25:16.615066 2910803 system_pods.go:89] "snapshot-controller-745499f584-h9r9l" [31bf5461-6376-459f-ae4c-406b8d4ef12b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 10:25:16.615072 2910803 system_pods.go:89] "snapshot-controller-745499f584-l57rs" [6922faa4-24f0-454c-a8cd-e7b6919d8fe9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0729 10:25:16.615076 2910803 system_pods.go:89] "storage-provisioner" [174d17f1-4536-426d-b9f3-0d0e97aaa966] Running
	I0729 10:25:16.615083 2910803 system_pods.go:126] duration metric: took 11.144965ms to wait for k8s-apps to be running ...
	I0729 10:25:16.615091 2910803 system_svc.go:44] waiting for kubelet service to be running ...
	I0729 10:25:16.615149 2910803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:25:16.657945 2910803 system_svc.go:56] duration metric: took 42.841239ms for WaitForService to wait for kubelet
	I0729 10:25:16.657973 2910803 kubeadm.go:582] duration metric: took 24.879802647s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 10:25:16.658005 2910803 node_conditions.go:102] verifying NodePressure condition ...
	I0729 10:25:16.663003 2910803 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0729 10:25:16.663042 2910803 node_conditions.go:123] node cpu capacity is 2
	I0729 10:25:16.663056 2910803 node_conditions.go:105] duration metric: took 5.044517ms to run NodePressure ...
	I0729 10:25:16.663069 2910803 start.go:241] waiting for startup goroutines ...
	I0729 10:25:16.684727 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:16.685934 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:17.046197 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:17.178798 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:17.181996 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:17.547132 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:17.685739 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:17.686975 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:18.045932 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:18.182210 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:18.182257 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:18.545477 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:18.679492 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:18.680630 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:19.046548 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:19.176754 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:19.179537 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:19.546700 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:19.683238 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:19.684531 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:20.046407 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:20.179719 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:20.180885 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:20.545880 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:20.677677 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:20.678578 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:21.053900 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:21.178283 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:21.180929 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:21.546290 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:21.677559 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:21.681236 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:22.046144 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:22.176802 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:22.180093 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:22.548371 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:22.684074 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:22.686611 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:23.045363 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:23.179442 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:23.180083 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:23.545629 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:23.676848 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:23.679895 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:24.046524 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:24.177347 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:24.181588 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:24.546281 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:24.676301 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:24.678789 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:25.047149 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:25.178390 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:25.181055 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:25.546560 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:25.677967 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:25.679512 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:26.046284 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:26.178036 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:26.181673 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:26.545948 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:26.677517 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:26.680101 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:27.047369 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:27.177639 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:27.180403 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:27.546496 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:27.679703 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:27.690441 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:28.046647 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:28.177882 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:28.181564 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:28.546940 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:28.678068 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:28.679477 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:29.045462 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:29.179265 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:29.180795 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:29.546808 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:29.677438 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:29.680408 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:30.050781 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:30.181659 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:30.182718 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:30.557495 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:30.677802 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:30.686396 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:31.046670 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:31.180451 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:31.181727 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:31.550349 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:31.681333 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:31.682904 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:32.047339 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:32.180564 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:32.181013 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:32.548043 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:32.680730 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:32.685942 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:33.046057 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:33.178186 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:33.180680 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:33.545436 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:33.676823 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:33.679192 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:34.046044 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:34.176970 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:34.181980 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:34.547425 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:34.676787 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:34.680629 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:35.045807 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:35.179375 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:35.180677 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:35.545461 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:35.676176 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:35.678947 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:36.046317 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:36.180559 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:36.181438 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:36.546037 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:36.678314 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:36.681839 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:37.046717 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:37.180024 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0729 10:25:37.181512 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:37.546070 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:37.676809 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:37.683771 2910803 kapi.go:107] duration metric: took 35.509986264s to wait for kubernetes.io/minikube-addons=registry ...
	I0729 10:25:38.067092 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:38.180131 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:38.546214 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:38.676945 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:39.046387 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:39.177782 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:39.545853 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:39.681827 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:40.047761 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:40.177366 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:40.546562 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:40.677041 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:41.045053 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:41.177361 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:41.546383 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:41.685771 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:42.045979 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:42.177125 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:42.546709 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:42.676971 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:43.046093 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:43.176213 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:43.545122 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:43.676543 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:44.045637 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:44.177363 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:44.546512 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:44.677453 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:45.048125 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:45.185339 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:45.546091 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:45.688045 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:46.045824 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:46.176922 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:46.545647 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:46.677998 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:47.052052 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:47.177411 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:47.546280 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:47.679274 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:48.045921 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:48.179498 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:48.545348 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:48.676698 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:49.051286 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:49.177771 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:49.546348 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:49.676781 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:50.047540 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:50.178465 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:50.545851 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:50.677482 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:51.046643 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:51.176495 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:51.546394 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:51.677776 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:52.046709 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:52.179978 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:52.546365 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0729 10:25:52.678755 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:53.045327 2910803 kapi.go:107] duration metric: took 50.005564944s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0729 10:25:53.176672 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:53.677045 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:54.176943 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:54.677490 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:55.177102 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:55.678000 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:56.176615 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:56.676382 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:57.176544 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:57.677001 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:58.177257 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:58.676445 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:59.177044 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:25:59.677314 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:26:00.205677 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:26:00.676891 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:26:01.177296 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:26:01.676426 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:26:02.176170 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:26:02.676650 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:26:03.176867 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:26:03.677241 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:26:04.177464 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:26:04.678293 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:26:05.176304 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:26:05.676888 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:26:06.176665 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:26:06.676408 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:26:07.176191 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:26:07.677753 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:26:08.177346 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:26:08.677280 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:26:09.176414 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:26:09.677421 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:26:10.177637 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:26:10.677285 2910803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0729 10:26:11.176388 2910803 kapi.go:107] duration metric: took 1m9.004243696s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0729 10:26:27.294214 2910803 kapi.go:86] Found 1 Pod for label selector kubernetes.io/minikube-addons=gcp-auth
	I0729 10:26:27.294237 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:26:27.792318 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:26:28.292051 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:26:28.792143 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:26:29.292098 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:26:29.791969 2910803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0729 10:26:30.292056 2910803 kapi.go:107] duration metric: took 1m26.004269692s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0729 10:26:30.294292 2910803 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-299185 cluster.
	I0729 10:26:30.297302 2910803 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0729 10:26:30.299648 2910803 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0729 10:26:30.301789 2910803 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, default-storageclass, volcano, storage-provisioner, metrics-server, ingress-dns, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0729 10:26:30.303934 2910803 addons.go:510] duration metric: took 1m38.525466295s for enable addons: enabled=[nvidia-device-plugin cloud-spanner default-storageclass volcano storage-provisioner metrics-server ingress-dns inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0729 10:26:30.303994 2910803 start.go:246] waiting for cluster config update ...
	I0729 10:26:30.304630 2910803 start.go:255] writing updated cluster config ...
	I0729 10:26:30.304962 2910803 ssh_runner.go:195] Run: rm -f paused
	I0729 10:26:30.650528 2910803 start.go:600] kubectl: 1.30.3, cluster: 1.30.3 (minor skew: 0)
	I0729 10:26:30.652852 2910803 out.go:177] * Done! kubectl is now configured to use "addons-299185" cluster and "default" namespace by default
	
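The long runs of kapi.go:96 lines above are minikube polling each addon's label selector until its pods leave Pending; the matching kapi.go:107 lines record how long each selector took (35.5s for registry, 50.0s for csi-hostpath-driver, 1m9s for ingress-nginx, 1m26s for gcp-auth). A minimal client-go sketch of such a polling loop follows; it is an illustration under stated assumptions, not minikube's actual kapi.go code, and the namespace, selector, and timeout are example values taken from this log:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForLabeledPods polls until every pod matching selector in ns is
    // Running, mirroring the "waiting for pod ... current state: Pending"
    // lines above. Illustrative sketch only, not minikube's kapi.go.
    func waitForLabeledPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // keep polling through transient errors and empty lists
                }
                for _, p := range pods.Items {
                    if p.Status.Phase != corev1.PodRunning {
                        fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
                        return false, nil
                    }
                }
                return true, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitForLabeledPods(context.Background(), cs, "kube-system",
            "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
            panic(err)
        }
    }

The 500ms poll interval matches the roughly half-second cadence visible in the timestamps of the kapi.go:96 lines above.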
	
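Likewise, the api_server.go healthz wait above boils down to an HTTP GET against https://192.168.49.2:8443/healthz until it returns 200 with body "ok". A rough standalone equivalent (illustrative only; minikube authenticates with the cluster CA rather than skipping TLS verification, and an anonymous request may be rejected on clusters that restrict the healthz endpoint):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // InsecureSkipVerify is a shortcut for illustration only.
        c := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := c.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
    }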
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	585f52c7d188a       d1ca868ab82aa       About a minute ago   Exited              gadget                                   5                   450304cdc8805       gadget-hnltc
	f8818941fbb0d       6ef582f3ec844       3 minutes ago        Running             gcp-auth                                 0                   dc6f611e71e11       gcp-auth-5db96cd9b4-nwk2b
	e23fcef0cf621       24f8f979639f1       3 minutes ago        Running             controller                               0                   da4b070feb7d3       ingress-nginx-controller-6d9bd977d4-2kjj4
	0580435eb2d06       ee6d597e62dc8       3 minutes ago        Running             csi-snapshotter                          0                   c182cb071ab0e       csi-hostpathplugin-qbrtc
	f0d354e346d09       642ded511e141       3 minutes ago        Running             csi-provisioner                          0                   c182cb071ab0e       csi-hostpathplugin-qbrtc
	ea1384ae7dd0f       922312104da8a       3 minutes ago        Running             liveness-probe                           0                   c182cb071ab0e       csi-hostpathplugin-qbrtc
	fd1515e3b6598       08f6b2990811a       4 minutes ago        Running             hostpath                                 0                   c182cb071ab0e       csi-hostpathplugin-qbrtc
	2affd98a87fb7       0107d56dbc0be       4 minutes ago        Running             node-driver-registrar                    0                   c182cb071ab0e       csi-hostpathplugin-qbrtc
	b4500f1df2110       8b46b1cd48760       4 minutes ago        Running             admission                                0                   cb97b53211c08       volcano-admission-5f7844f7bc-2zrfw
	c6bca1c80f997       487fa743e1e22       4 minutes ago        Running             csi-resizer                              0                   a91bf58ebcf0b       csi-hostpath-resizer-0
	11ceb8091fa94       9a80d518f102c       4 minutes ago        Running             csi-attacher                             0                   1b5bea99f32e1       csi-hostpath-attacher-0
	adc85e5dd5ae4       1505f556b3a7b       4 minutes ago        Running             volcano-controllers                      0                   e2016faa9db87       volcano-controllers-59cb4746db-z85mm
	801b408614fb4       1461903ec4fe9       4 minutes ago        Running             csi-external-health-monitor-controller   0                   c182cb071ab0e       csi-hostpathplugin-qbrtc
	4baa08da481d7       d9c7ad4c226bf       4 minutes ago        Running             volcano-scheduler                        0                   f4575abbce170       volcano-scheduler-844f6db89b-lrnvd
	689f6e3f6ebe1       3410e1561990a       4 minutes ago        Running             registry-proxy                           0                   19a7b42d7d8a8       registry-proxy-72wcs
	cca27c07262e7       296b5f799fcd8       4 minutes ago        Exited              patch                                    0                   eaac20e440e43       ingress-nginx-admission-patch-8ql5j
	fc059754fdc6a       296b5f799fcd8       4 minutes ago        Exited              create                                   0                   cbfab4ec2fa00       ingress-nginx-admission-create-w226f
	b18e298598fd6       95dccb4df54ab       4 minutes ago        Running             metrics-server                           0                   ae40f3e3f9872       metrics-server-c59844bb4-wtmps
	aa16e9c2766f5       4d1e5c3e97420       4 minutes ago        Running             volume-snapshot-controller               0                   de1a5056b1d6d       snapshot-controller-745499f584-h9r9l
	7eeb8c4387b43       7ce2150c8929b       4 minutes ago        Running             local-path-provisioner                   0                   6b749d165fb7b       local-path-provisioner-8d985888d-74w5s
	c1012a7f0b196       77bdba588b953       4 minutes ago        Running             yakd                                     0                   d431c9f2322fd       yakd-dashboard-799879c74f-p24kq
	5a1a125ffacda       40bd730847e7e       4 minutes ago        Running             registry                                 0                   04e44db29fbbb       registry-656c9c8d9c-4z48h
	c2d059379ffee       8f3fc47ac1fb3       4 minutes ago        Running             cloud-spanner-emulator                   0                   6497a4523e886       cloud-spanner-emulator-6fcd4f6f98-hbmgp
	9492fcb96ca7c       4d1e5c3e97420       4 minutes ago        Running             volume-snapshot-controller               0                   c08ac65fb5f18       snapshot-controller-745499f584-l57rs
	916da36ded1b1       b644f4c9bf9c7       4 minutes ago        Running             nvidia-device-plugin-ctr                 0                   dda5a99e70c33       nvidia-device-plugin-daemonset-djkh5
	3b024b125521e       2437cf7621777       4 minutes ago        Running             coredns                                  0                   0e10eba997481       coredns-7db6d8ff4d-zpw4j
	e95b3a725655c       35508c2f890c4       4 minutes ago        Running             minikube-ingress-dns                     0                   e63464caf6bf4       kube-ingress-dns-minikube
	d0cb166c74586       ba04bb24b9575       4 minutes ago        Running             storage-provisioner                      0                   773fb13350627       storage-provisioner
	781b3d3e5994c       f42786f8afd22       4 minutes ago        Running             kindnet-cni                              0                   a798bd84a5c1e       kindnet-f6x9v
	8aa7ead9f794c       2351f570ed0ea       4 minutes ago        Running             kube-proxy                               0                   747db37280a85       kube-proxy-ffxlq
	b38dc11f34849       61773190d42ff       5 minutes ago        Running             kube-apiserver                           0                   94510bf7a156e       kube-apiserver-addons-299185
	0cb1405641b1e       8e97cdb19e7cc       5 minutes ago        Running             kube-controller-manager                  0                   bf0f00b0d6bde       kube-controller-manager-addons-299185
	868528cc1d3fc       014faa467e297       5 minutes ago        Running             etcd                                     0                   bed84e23357e8       etcd-addons-299185
	b11cf85eca732       d48f992a22722       5 minutes ago        Running             kube-scheduler                           0                   1cf37d6d8ecf0       kube-scheduler-addons-299185
	
	
	==> containerd <==
	Jul 29 10:27:38 addons-299185 containerd[812]: time="2024-07-29T10:27:38.084183405Z" level=info msg="StopPodSandbox for \"03c0439539698557be00cbc61d7e9670a5b3bb0ca5a1bcde5c8a1c90f989ddfb\""
	Jul 29 10:27:38 addons-299185 containerd[812]: time="2024-07-29T10:27:38.092747990Z" level=info msg="TearDown network for sandbox \"03c0439539698557be00cbc61d7e9670a5b3bb0ca5a1bcde5c8a1c90f989ddfb\" successfully"
	Jul 29 10:27:38 addons-299185 containerd[812]: time="2024-07-29T10:27:38.092936469Z" level=info msg="StopPodSandbox for \"03c0439539698557be00cbc61d7e9670a5b3bb0ca5a1bcde5c8a1c90f989ddfb\" returns successfully"
	Jul 29 10:27:38 addons-299185 containerd[812]: time="2024-07-29T10:27:38.093534793Z" level=info msg="RemovePodSandbox for \"03c0439539698557be00cbc61d7e9670a5b3bb0ca5a1bcde5c8a1c90f989ddfb\""
	Jul 29 10:27:38 addons-299185 containerd[812]: time="2024-07-29T10:27:38.093768729Z" level=info msg="Forcibly stopping sandbox \"03c0439539698557be00cbc61d7e9670a5b3bb0ca5a1bcde5c8a1c90f989ddfb\""
	Jul 29 10:27:38 addons-299185 containerd[812]: time="2024-07-29T10:27:38.101794042Z" level=info msg="TearDown network for sandbox \"03c0439539698557be00cbc61d7e9670a5b3bb0ca5a1bcde5c8a1c90f989ddfb\" successfully"
	Jul 29 10:27:38 addons-299185 containerd[812]: time="2024-07-29T10:27:38.108418155Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"03c0439539698557be00cbc61d7e9670a5b3bb0ca5a1bcde5c8a1c90f989ddfb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Jul 29 10:27:38 addons-299185 containerd[812]: time="2024-07-29T10:27:38.108539115Z" level=info msg="RemovePodSandbox \"03c0439539698557be00cbc61d7e9670a5b3bb0ca5a1bcde5c8a1c90f989ddfb\" returns successfully"
	Jul 29 10:28:23 addons-299185 containerd[812]: time="2024-07-29T10:28:23.943252150Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0@sha256:bda802dd37a41ba160bf10134538fd1a1ce05efcc14ab4c38b5f6b1e6dccd734\""
	Jul 29 10:28:24 addons-299185 containerd[812]: time="2024-07-29T10:28:24.082439781Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:bda802dd37a41ba160bf10134538fd1a1ce05efcc14ab4c38b5f6b1e6dccd734\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Jul 29 10:28:24 addons-299185 containerd[812]: time="2024-07-29T10:28:24.084366559Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:bda802dd37a41ba160bf10134538fd1a1ce05efcc14ab4c38b5f6b1e6dccd734: active requests=0, bytes read=89"
	Jul 29 10:28:24 addons-299185 containerd[812]: time="2024-07-29T10:28:24.088218326Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0@sha256:bda802dd37a41ba160bf10134538fd1a1ce05efcc14ab4c38b5f6b1e6dccd734\" with image id \"sha256:d1ca868ab82aa865a5f7b689c320359f3e31172de7b93dd0107fe2e49e617eeb\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:bda802dd37a41ba160bf10134538fd1a1ce05efcc14ab4c38b5f6b1e6dccd734\", size \"73046218\" in 144.901264ms"
	Jul 29 10:28:24 addons-299185 containerd[812]: time="2024-07-29T10:28:24.088273858Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.30.0@sha256:bda802dd37a41ba160bf10134538fd1a1ce05efcc14ab4c38b5f6b1e6dccd734\" returns image reference \"sha256:d1ca868ab82aa865a5f7b689c320359f3e31172de7b93dd0107fe2e49e617eeb\""
	Jul 29 10:28:24 addons-299185 containerd[812]: time="2024-07-29T10:28:24.091031615Z" level=info msg="CreateContainer within sandbox \"450304cdc8805620fbdd04f6db868c4eb226f2eab4b18953dfad82870c606e57\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Jul 29 10:28:24 addons-299185 containerd[812]: time="2024-07-29T10:28:24.113067293Z" level=info msg="CreateContainer within sandbox \"450304cdc8805620fbdd04f6db868c4eb226f2eab4b18953dfad82870c606e57\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"585f52c7d188a92885e957983a7a1cbaddc0e0be55ab113eec42ff339a03a3a3\""
	Jul 29 10:28:24 addons-299185 containerd[812]: time="2024-07-29T10:28:24.113921616Z" level=info msg="StartContainer for \"585f52c7d188a92885e957983a7a1cbaddc0e0be55ab113eec42ff339a03a3a3\""
	Jul 29 10:28:24 addons-299185 containerd[812]: time="2024-07-29T10:28:24.178477051Z" level=info msg="StartContainer for \"585f52c7d188a92885e957983a7a1cbaddc0e0be55ab113eec42ff339a03a3a3\" returns successfully"
	Jul 29 10:28:25 addons-299185 containerd[812]: time="2024-07-29T10:28:25.225219515Z" level=error msg="ExecSync for \"585f52c7d188a92885e957983a7a1cbaddc0e0be55ab113eec42ff339a03a3a3\" failed" error="failed to exec in container: failed to start exec \"a445ed6ab0ad7ea99b6cb31f0fc32e7d6656b688397cbc612cac27d2fd71c6a4\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown"
	Jul 29 10:28:25 addons-299185 containerd[812]: time="2024-07-29T10:28:25.241387188Z" level=error msg="ExecSync for \"585f52c7d188a92885e957983a7a1cbaddc0e0be55ab113eec42ff339a03a3a3\" failed" error="failed to exec in container: failed to start exec \"9dc8179a403f28713194de7060271b94d083ac88f914415dee8395b5181bf85d\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Jul 29 10:28:25 addons-299185 containerd[812]: time="2024-07-29T10:28:25.251922908Z" level=error msg="ExecSync for \"585f52c7d188a92885e957983a7a1cbaddc0e0be55ab113eec42ff339a03a3a3\" failed" error="failed to exec in container: failed to start exec \"03c6f421d5f999f62e7101ae4884ecb7636b921454bac63331432d8ad2114e1c\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown"
	Jul 29 10:28:25 addons-299185 containerd[812]: time="2024-07-29T10:28:25.369188312Z" level=info msg="shim disconnected" id=585f52c7d188a92885e957983a7a1cbaddc0e0be55ab113eec42ff339a03a3a3 namespace=k8s.io
	Jul 29 10:28:25 addons-299185 containerd[812]: time="2024-07-29T10:28:25.369299868Z" level=warning msg="cleaning up after shim disconnected" id=585f52c7d188a92885e957983a7a1cbaddc0e0be55ab113eec42ff339a03a3a3 namespace=k8s.io
	Jul 29 10:28:25 addons-299185 containerd[812]: time="2024-07-29T10:28:25.369314752Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jul 29 10:28:26 addons-299185 containerd[812]: time="2024-07-29T10:28:26.165275479Z" level=info msg="RemoveContainer for \"b40345445bf8deda3e3c1b09e81c2b92302005a4d9ee9de39d6198dff5ea8939\""
	Jul 29 10:28:26 addons-299185 containerd[812]: time="2024-07-29T10:28:26.172808265Z" level=info msg="RemoveContainer for \"b40345445bf8deda3e3c1b09e81c2b92302005a4d9ee9de39d6198dff5ea8939\" returns successfully"
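	
	Note: the three ExecSync errors above are liveness-probe execs (cmd /bin/gadgettracermanager -liveness, per the kubelet log further below) racing against container 585f52c7d188a..., which had already stopped; the "shim disconnected" messages that follow are the normal cleanup of that exited container. A hedged way to inspect the crashed instance on a live cluster (hypothetical command, not captured in this run):
	
	  # Fetch logs from the previous (crashed) gadget container instance
	  kubectl logs -n gadget gadget-hnltc --previous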
	
	
	==> coredns [3b024b125521ed9bd6212c4956530dbaaf3c8d1f77927dfab2d1bba3cb922092] <==
	[INFO] 10.244.0.13:40784 - 25177 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000062367s
	[INFO] 10.244.0.13:54616 - 46051 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002993277s
	[INFO] 10.244.0.13:54616 - 62433 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003345464s
	[INFO] 10.244.0.13:38097 - 4896 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000164971s
	[INFO] 10.244.0.13:38097 - 13347 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000101194s
	[INFO] 10.244.0.13:56003 - 60680 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000203552s
	[INFO] 10.244.0.13:56003 - 44815 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000156233s
	[INFO] 10.244.0.13:33502 - 52554 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000128951s
	[INFO] 10.244.0.13:33502 - 47428 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000050929s
	[INFO] 10.244.0.13:45320 - 35765 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000105099s
	[INFO] 10.244.0.13:45320 - 41395 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000090707s
	[INFO] 10.244.0.13:45595 - 12432 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001888723s
	[INFO] 10.244.0.13:45595 - 52627 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001818881s
	[INFO] 10.244.0.13:40998 - 29584 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000066109s
	[INFO] 10.244.0.13:40998 - 61068 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000197275s
	[INFO] 10.244.0.24:47594 - 21311 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000144032s
	[INFO] 10.244.0.24:43034 - 46588 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000105001s
	[INFO] 10.244.0.24:34113 - 23404 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000087925s
	[INFO] 10.244.0.24:50937 - 60789 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000081108s
	[INFO] 10.244.0.24:54642 - 28934 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000079499s
	[INFO] 10.244.0.24:45861 - 34476 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000085144s
	[INFO] 10.244.0.24:40018 - 40240 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002294333s
	[INFO] 10.244.0.24:58475 - 23367 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.001967918s
	[INFO] 10.244.0.24:59235 - 27044 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000779468s
	[INFO] 10.244.0.24:57949 - 13326 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.000746032s
	
	
	==> describe nodes <==
	Name:               addons-299185
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-299185
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=addons-299185
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T10_24_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-299185
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-299185"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 10:24:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-299185
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 10:29:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 10:26:40 +0000   Mon, 29 Jul 2024 10:24:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 10:26:40 +0000   Mon, 29 Jul 2024 10:24:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 10:26:40 +0000   Mon, 29 Jul 2024 10:24:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 10:26:40 +0000   Mon, 29 Jul 2024 10:24:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-299185
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 584fc3bd321d4cb98492c6ad1cb3eba2
	  System UUID:                94e04054-2918-4027-836f-2f54b018ddec
	  Boot ID:                    9d805461-0494-4168-a7a3-1fdbd78d16da
	  Kernel Version:             5.15.0-1065-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.19
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6fcd4f6f98-hbmgp      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  gadget                      gadget-hnltc                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  gcp-auth                    gcp-auth-5db96cd9b4-nwk2b                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  ingress-nginx               ingress-nginx-controller-6d9bd977d4-2kjj4    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m49s
	  kube-system                 coredns-7db6d8ff4d-zpw4j                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m58s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 csi-hostpathplugin-qbrtc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 etcd-addons-299185                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m11s
	  kube-system                 kindnet-f6x9v                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m58s
	  kube-system                 kube-apiserver-addons-299185                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-controller-manager-addons-299185        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-proxy-ffxlq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-scheduler-addons-299185                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 metrics-server-c59844bb4-wtmps               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m52s
	  kube-system                 nvidia-device-plugin-daemonset-djkh5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 registry-656c9c8d9c-4z48h                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 registry-proxy-72wcs                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 snapshot-controller-745499f584-h9r9l         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 snapshot-controller-745499f584-l57rs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  local-path-storage          local-path-provisioner-8d985888d-74w5s       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  volcano-system              volcano-admission-5f7844f7bc-2zrfw           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  volcano-system              volcano-controllers-59cb4746db-z85mm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  volcano-system              volcano-scheduler-844f6db89b-lrnvd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m48s
	  yakd-dashboard              yakd-dashboard-799879c74f-p24kq              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m56s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m18s (x8 over 5m18s)  kubelet          Node addons-299185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m18s (x8 over 5m18s)  kubelet          Node addons-299185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m18s (x7 over 5m18s)  kubelet          Node addons-299185 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m18s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 5m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m11s                  kubelet          Node addons-299185 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m11s                  kubelet          Node addons-299185 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m11s                  kubelet          Node addons-299185 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             5m11s                  kubelet          Node addons-299185 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  5m11s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m11s                  kubelet          Node addons-299185 status is now: NodeReady
	  Normal  RegisteredNode           4m59s                  node-controller  Node addons-299185 event: Registered Node addons-299185 in Controller
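	
	Note: the node reports 2 allocatable CPUs (2000m) with 1050m (52%) already requested, so at most 950m of schedulable CPU remains on this single-node cluster; any pending pod requesting a full CPU or more cannot fit. A hedged way to re-derive this on a live cluster (hypothetical commands, not captured in this run):
	
	  # Show the same capacity/requests table for the node
	  kubectl describe node addons-299185 | grep -A 10 'Allocated resources'
	  # List pending pods whose scheduling has not succeeded
	  kubectl get pods -A --field-selector=status.phase=Pending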
	
	
	==> dmesg <==
	[  +0.001074] FS-Cache: O-key=[8] 'a64a5c0100000000'
	[  +0.000746] FS-Cache: N-cookie c=000000ae [p=000000a5 fl=2 nc=0 na=1]
	[  +0.001019] FS-Cache: N-cookie d=00000000246b9f20{9p.inode} n=000000004adf4744
	[  +0.001073] FS-Cache: N-key=[8] 'a64a5c0100000000'
	[  +0.003218] FS-Cache: Duplicate cookie detected
	[  +0.000677] FS-Cache: O-cookie c=000000a8 [p=000000a5 fl=226 nc=0 na=1]
	[  +0.001080] FS-Cache: O-cookie d=00000000246b9f20{9p.inode} n=000000004f445661
	[  +0.001155] FS-Cache: O-key=[8] 'a64a5c0100000000'
	[  +0.000726] FS-Cache: N-cookie c=000000af [p=000000a5 fl=2 nc=0 na=1]
	[  +0.000929] FS-Cache: N-cookie d=00000000246b9f20{9p.inode} n=000000005f9f23c3
	[  +0.001029] FS-Cache: N-key=[8] 'a64a5c0100000000'
	[  +2.571544] FS-Cache: Duplicate cookie detected
	[  +0.000718] FS-Cache: O-cookie c=000000a6 [p=000000a5 fl=226 nc=0 na=1]
	[  +0.001057] FS-Cache: O-cookie d=00000000246b9f20{9p.inode} n=00000000524efe27
	[  +0.001123] FS-Cache: O-key=[8] 'a54a5c0100000000'
	[  +0.000734] FS-Cache: N-cookie c=000000b1 [p=000000a5 fl=2 nc=0 na=1]
	[  +0.000973] FS-Cache: N-cookie d=00000000246b9f20{9p.inode} n=00000000da8030ac
	[  +0.001194] FS-Cache: N-key=[8] 'a54a5c0100000000'
	[  +0.306818] FS-Cache: Duplicate cookie detected
	[  +0.000927] FS-Cache: O-cookie c=000000ab [p=000000a5 fl=226 nc=0 na=1]
	[  +0.001058] FS-Cache: O-cookie d=00000000246b9f20{9p.inode} n=000000009c9c1813
	[  +0.001041] FS-Cache: O-key=[8] 'ab4a5c0100000000'
	[  +0.000701] FS-Cache: N-cookie c=000000b2 [p=000000a5 fl=2 nc=0 na=1]
	[  +0.000922] FS-Cache: N-cookie d=00000000246b9f20{9p.inode} n=000000004adf4744
	[  +0.001029] FS-Cache: N-key=[8] 'ab4a5c0100000000'
	
	
	==> etcd [868528cc1d3fcd9f84ea22f26f5ca65811decd142df3905704d2554cef7b354d] <==
	{"level":"info","ts":"2024-07-29T10:24:32.227163Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-07-29T10:24:32.227332Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-07-29T10:24:32.251159Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-29T10:24:32.251775Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-29T10:24:32.251826Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-29T10:24:32.251525Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-07-29T10:24:32.251858Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-07-29T10:24:32.807851Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-07-29T10:24:32.807908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-07-29T10:24:32.807925Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-07-29T10:24:32.807953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-07-29T10:24:32.808003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-29T10:24:32.808037Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-07-29T10:24:32.808088Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-07-29T10:24:32.811929Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T10:24:32.816042Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-299185 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-29T10:24:32.816282Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T10:24:32.816635Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T10:24:32.816749Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T10:24:32.816822Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-29T10:24:32.816901Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-29T10:24:32.816947Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-29T10:24:32.816974Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-29T10:24:32.824876Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-29T10:24:32.827015Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> gcp-auth [f8818941fbb0dd7e9f2117dbded2d8b8f1d14060878e92512316c6440367234e] <==
	2024/07/29 10:26:29 GCP Auth Webhook started!
	2024/07/29 10:26:47 Ready to marshal response ...
	2024/07/29 10:26:47 Ready to write response ...
	2024/07/29 10:26:47 Ready to marshal response ...
	2024/07/29 10:26:47 Ready to write response ...
	
	
	==> kernel <==
	 10:29:49 up 18:12,  0 users,  load average: 0.63, 1.65, 2.29
	Linux addons-299185 5.15.0-1065-aws #71~20.04.1-Ubuntu SMP Fri Jun 28 19:59:49 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [781b3d3e5994cbfbd26b4ad33f09246300a02e64c4eed56f30e28f93c31de11a] <==
	E0729 10:28:29.160085       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0729 10:28:33.335748       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0729 10:28:33.335835       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0729 10:28:34.728989       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0729 10:28:34.729028       1 main.go:299] handling current node
	I0729 10:28:44.729406       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0729 10:28:44.729441       1 main.go:299] handling current node
	I0729 10:28:54.729552       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0729 10:28:54.729590       1 main.go:299] handling current node
	W0729 10:28:57.951517       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 10:28:57.951549       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0729 10:29:04.729119       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0729 10:29:04.729157       1 main.go:299] handling current node
	W0729 10:29:13.649305       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0729 10:29:13.649349       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0729 10:29:14.729068       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0729 10:29:14.729106       1 main.go:299] handling current node
	W0729 10:29:22.468141       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0729 10:29:22.468177       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0729 10:29:24.729662       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0729 10:29:24.729697       1 main.go:299] handling current node
	I0729 10:29:34.729360       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0729 10:29:34.729397       1 main.go:299] handling current node
	I0729 10:29:44.729036       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0729 10:29:44.729074       1 main.go:299] handling current node
	
	
	==> kube-apiserver [b38dc11f34849a9830aa4d1dcb1262f9c97018a3702c8e25d75cfaca3b9a54d2] <==
	I0729 10:25:38.212010       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0729 10:25:38.902829       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.192.59:443: connect: connection refused
	W0729 10:25:39.985070       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.192.59:443: connect: connection refused
	W0729 10:25:41.031525       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.192.59:443: connect: connection refused
	W0729 10:25:42.134439       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.192.59:443: connect: connection refused
	W0729 10:25:43.193120       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.192.59:443: connect: connection refused
	W0729 10:25:44.249925       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.192.59:443: connect: connection refused
	W0729 10:25:45.256732       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.192.59:443: connect: connection refused
	W0729 10:25:46.278906       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.38.106:443: connect: connection refused
	E0729 10:25:46.278950       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.38.106:443: connect: connection refused
	W0729 10:25:46.279354       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.97.192.59:443: connect: connection refused
	W0729 10:25:46.280522       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.192.59:443: connect: connection refused
	W0729 10:25:47.303429       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.192.59:443: connect: connection refused
	W0729 10:25:48.390800       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.192.59:443: connect: connection refused
	W0729 10:25:49.431059       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.192.59:443: connect: connection refused
	W0729 10:25:50.513866       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.192.59:443: connect: connection refused
	W0729 10:25:51.574339       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.97.192.59:443: connect: connection refused
	W0729 10:26:07.254588       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.38.106:443: connect: connection refused
	E0729 10:26:07.254643       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.38.106:443: connect: connection refused
	W0729 10:26:07.331025       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.38.106:443: connect: connection refused
	E0729 10:26:07.331072       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.38.106:443: connect: connection refused
	W0729 10:26:27.243118       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.38.106:443: connect: connection refused
	E0729 10:26:27.243158       1 dispatcher.go:214] failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.111.38.106:443: connect: connection refused
	I0729 10:26:47.118946       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0729 10:26:47.152557       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	
	
	==> kube-controller-manager [0cb1405641b1e47232aa238c87beadcc3bfbbaeb17471e92b20b59045c4fb0d8] <==
	I0729 10:26:10.815112       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0729 10:26:10.896166       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0729 10:26:11.786289       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0729 10:26:11.798839       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0729 10:26:11.832835       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0729 10:26:11.843613       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0729 10:26:11.850576       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0729 10:26:11.894727       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0729 10:26:12.792703       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0729 10:26:12.795273       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0729 10:26:12.806324       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0729 10:26:12.810724       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0729 10:26:24.439091       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="17.654501ms"
	I0729 10:26:24.439375       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-6d9bd977d4" duration="73.419µs"
	I0729 10:26:27.276303       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="35.852476ms"
	I0729 10:26:27.292050       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="15.515958ms"
	I0729 10:26:27.294470       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="95.63µs"
	I0729 10:26:27.295396       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="42.765µs"
	I0729 10:26:29.860177       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="9.231429ms"
	I0729 10:26:29.860558       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="62.728µs"
	I0729 10:26:41.020135       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0729 10:26:41.054852       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0729 10:26:42.021696       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0729 10:26:42.073778       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0729 10:26:46.879667       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init"
	
	
	==> kube-proxy [8aa7ead9f794c54a4693911594ac00a9add326eea7d21da8f7f78ad78a23d0d9] <==
	I0729 10:24:52.725125       1 server_linux.go:69] "Using iptables proxy"
	I0729 10:24:52.739686       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0729 10:24:52.857164       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0729 10:24:52.857220       1 server_linux.go:165] "Using iptables Proxier"
	I0729 10:24:52.863599       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0729 10:24:52.863628       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0729 10:24:52.863649       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0729 10:24:52.863904       1 server.go:872] "Version info" version="v1.30.3"
	I0729 10:24:52.863919       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 10:24:52.865195       1 config.go:192] "Starting service config controller"
	I0729 10:24:52.865212       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0729 10:24:52.865262       1 config.go:101] "Starting endpoint slice config controller"
	I0729 10:24:52.865267       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0729 10:24:52.867753       1 config.go:319] "Starting node config controller"
	I0729 10:24:52.867767       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0729 10:24:52.966104       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0729 10:24:52.966178       1 shared_informer.go:320] Caches are synced for service config
	I0729 10:24:52.968090       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b11cf85eca7323b8a02ee74c75ead6625cbb69a2b18494fa6e7ad3c5b157a667] <==
	I0729 10:24:34.732246       1 serving.go:380] Generated self-signed cert in-memory
	W0729 10:24:37.103940       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 10:24:37.103972       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 10:24:37.103983       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 10:24:37.103990       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 10:24:37.131130       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0729 10:24:37.131163       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0729 10:24:37.132866       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0729 10:24:37.132894       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 10:24:37.133554       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0729 10:24:37.133723       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0729 10:24:37.136176       1 reflector.go:547] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 10:24:37.136224       1 reflector.go:150] runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0729 10:24:38.633328       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 29 10:28:25 addons-299185 kubelet[1553]: E0729 10:28:25.226294    1553 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"a445ed6ab0ad7ea99b6cb31f0fc32e7d6656b688397cbc612cac27d2fd71c6a4\": OCI runtime exec failed: exec failed: unable to start container process: error executing setns process: exit status 1: unknown" containerID="585f52c7d188a92885e957983a7a1cbaddc0e0be55ab113eec42ff339a03a3a3" cmd=["/bin/gadgettracermanager","-liveness"]
	Jul 29 10:28:25 addons-299185 kubelet[1553]: E0729 10:28:25.241671    1553 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"9dc8179a403f28713194de7060271b94d083ac88f914415dee8395b5181bf85d\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="585f52c7d188a92885e957983a7a1cbaddc0e0be55ab113eec42ff339a03a3a3" cmd=["/bin/gadgettracermanager","-liveness"]
	Jul 29 10:28:25 addons-299185 kubelet[1553]: E0729 10:28:25.252160    1553 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to start exec \"03c6f421d5f999f62e7101ae4884ecb7636b921454bac63331432d8ad2114e1c\": OCI runtime exec failed: exec failed: cannot exec in a stopped container: unknown" containerID="585f52c7d188a92885e957983a7a1cbaddc0e0be55ab113eec42ff339a03a3a3" cmd=["/bin/gadgettracermanager","-liveness"]
	Jul 29 10:28:26 addons-299185 kubelet[1553]: I0729 10:28:26.162524    1553 scope.go:117] "RemoveContainer" containerID="b40345445bf8deda3e3c1b09e81c2b92302005a4d9ee9de39d6198dff5ea8939"
	Jul 29 10:28:26 addons-299185 kubelet[1553]: I0729 10:28:26.162967    1553 scope.go:117] "RemoveContainer" containerID="585f52c7d188a92885e957983a7a1cbaddc0e0be55ab113eec42ff339a03a3a3"
	Jul 29 10:28:26 addons-299185 kubelet[1553]: E0729 10:28:26.163409    1553 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-hnltc_gadget(bd63f80e-7f09-4013-9880-fec9404b8fdb)\"" pod="gadget/gadget-hnltc" podUID="bd63f80e-7f09-4013-9880-fec9404b8fdb"
	Jul 29 10:28:27 addons-299185 kubelet[1553]: I0729 10:28:27.166950    1553 scope.go:117] "RemoveContainer" containerID="585f52c7d188a92885e957983a7a1cbaddc0e0be55ab113eec42ff339a03a3a3"
	Jul 29 10:28:27 addons-299185 kubelet[1553]: E0729 10:28:27.167984    1553 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-hnltc_gadget(bd63f80e-7f09-4013-9880-fec9404b8fdb)\"" pod="gadget/gadget-hnltc" podUID="bd63f80e-7f09-4013-9880-fec9404b8fdb"
	Jul 29 10:28:28 addons-299185 kubelet[1553]: I0729 10:28:28.834731    1553 scope.go:117] "RemoveContainer" containerID="585f52c7d188a92885e957983a7a1cbaddc0e0be55ab113eec42ff339a03a3a3"
	Jul 29 10:28:28 addons-299185 kubelet[1553]: E0729 10:28:28.835234    1553 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-hnltc_gadget(bd63f80e-7f09-4013-9880-fec9404b8fdb)\"" pod="gadget/gadget-hnltc" podUID="bd63f80e-7f09-4013-9880-fec9404b8fdb"
	Jul 29 10:28:43 addons-299185 kubelet[1553]: I0729 10:28:43.940156    1553 scope.go:117] "RemoveContainer" containerID="585f52c7d188a92885e957983a7a1cbaddc0e0be55ab113eec42ff339a03a3a3"
	Jul 29 10:28:43 addons-299185 kubelet[1553]: E0729 10:28:43.940650    1553 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-hnltc_gadget(bd63f80e-7f09-4013-9880-fec9404b8fdb)\"" pod="gadget/gadget-hnltc" podUID="bd63f80e-7f09-4013-9880-fec9404b8fdb"
	Jul 29 10:28:56 addons-299185 kubelet[1553]: I0729 10:28:56.939647    1553 scope.go:117] "RemoveContainer" containerID="585f52c7d188a92885e957983a7a1cbaddc0e0be55ab113eec42ff339a03a3a3"
	Jul 29 10:28:56 addons-299185 kubelet[1553]: E0729 10:28:56.940178    1553 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-hnltc_gadget(bd63f80e-7f09-4013-9880-fec9404b8fdb)\"" pod="gadget/gadget-hnltc" podUID="bd63f80e-7f09-4013-9880-fec9404b8fdb"
	Jul 29 10:29:10 addons-299185 kubelet[1553]: I0729 10:29:10.940005    1553 scope.go:117] "RemoveContainer" containerID="585f52c7d188a92885e957983a7a1cbaddc0e0be55ab113eec42ff339a03a3a3"
	Jul 29 10:29:10 addons-299185 kubelet[1553]: E0729 10:29:10.940546    1553 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-hnltc_gadget(bd63f80e-7f09-4013-9880-fec9404b8fdb)\"" pod="gadget/gadget-hnltc" podUID="bd63f80e-7f09-4013-9880-fec9404b8fdb"
	Jul 29 10:29:11 addons-299185 kubelet[1553]: I0729 10:29:11.940167    1553 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-djkh5" secret="" err="secret \"gcp-auth\" not found"
	Jul 29 10:29:11 addons-299185 kubelet[1553]: I0729 10:29:11.941138    1553 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-656c9c8d9c-4z48h" secret="" err="secret \"gcp-auth\" not found"
	Jul 29 10:29:24 addons-299185 kubelet[1553]: I0729 10:29:24.939338    1553 scope.go:117] "RemoveContainer" containerID="585f52c7d188a92885e957983a7a1cbaddc0e0be55ab113eec42ff339a03a3a3"
	Jul 29 10:29:24 addons-299185 kubelet[1553]: E0729 10:29:24.939951    1553 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-hnltc_gadget(bd63f80e-7f09-4013-9880-fec9404b8fdb)\"" pod="gadget/gadget-hnltc" podUID="bd63f80e-7f09-4013-9880-fec9404b8fdb"
	Jul 29 10:29:25 addons-299185 kubelet[1553]: I0729 10:29:25.940325    1553 kubelet_pods.go:988] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-72wcs" secret="" err="secret \"gcp-auth\" not found"
	Jul 29 10:29:36 addons-299185 kubelet[1553]: I0729 10:29:36.940289    1553 scope.go:117] "RemoveContainer" containerID="585f52c7d188a92885e957983a7a1cbaddc0e0be55ab113eec42ff339a03a3a3"
	Jul 29 10:29:36 addons-299185 kubelet[1553]: E0729 10:29:36.940828    1553 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-hnltc_gadget(bd63f80e-7f09-4013-9880-fec9404b8fdb)\"" pod="gadget/gadget-hnltc" podUID="bd63f80e-7f09-4013-9880-fec9404b8fdb"
	Jul 29 10:29:48 addons-299185 kubelet[1553]: I0729 10:29:48.939414    1553 scope.go:117] "RemoveContainer" containerID="585f52c7d188a92885e957983a7a1cbaddc0e0be55ab113eec42ff339a03a3a3"
	Jul 29 10:29:48 addons-299185 kubelet[1553]: E0729 10:29:48.940447    1553 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-hnltc_gadget(bd63f80e-7f09-4013-9880-fec9404b8fdb)\"" pod="gadget/gadget-hnltc" podUID="bd63f80e-7f09-4013-9880-fec9404b8fdb"
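The alternating RemoveContainer/CrashLoopBackOff pairs above are the kubelet's restart backoff in action: the delay starts at 10s, doubles on each consecutive failure, and is capped at 5m, so the logged "back-off 2m40s" is the fifth step (10s, 20s, 40s, 1m20s, 2m40s). A minimal Go sketch of that documented policy; the helper name is ours, the 10s base and 5m cap are the kubelet defaults:

package main

import (
	"fmt"
	"time"
)

// crashLoopDelay returns the kubelet-style restart backoff for the n-th
// consecutive failure: base 10s, doubled each time, capped at 5m.
func crashLoopDelay(n int) time.Duration {
	d := 10 * time.Second
	for i := 1; i < n; i++ {
		d *= 2
		if d >= 5*time.Minute {
			return 5 * time.Minute
		}
	}
	return d
}

func main() {
	for n := 1; n <= 6; n++ {
		fmt.Printf("failure %d: back-off %v\n", n, crashLoopDelay(n))
	}
	// failure 5 prints "back-off 2m40s", matching the gadget pod above.
}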
	
	
	==> storage-provisioner [d0cb166c745866c89ebfb67b383ed446744b2a9cdabf40e58e7c8926f4b48c49] <==
	I0729 10:24:57.237966       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 10:24:57.345429       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 10:24:57.345477       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 10:24:57.362485       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 10:24:57.363963       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ecaa55e8-038f-4543-bd44-cb377ae979ff", APIVersion:"v1", ResourceVersion:"543", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-299185_8e05e81b-cbe0-43aa-97a1-b6d9b4ec1c8d became leader
	I0729 10:24:57.364113       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-299185_8e05e81b-cbe0-43aa-97a1-b6d9b4ec1c8d!
	I0729 10:24:57.464673       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-299185_8e05e81b-cbe0-43aa-97a1-b6d9b4ec1c8d!
	

-- /stdout --
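The storage-provisioner section of the stdout block records the standard client-go leader-election handshake: acquire the kube-system/k8s.io-minikube-hostpath lock, emit a LeaderElection event, then start the controller. The provisioner shown here locks an Endpoints object; a minimal sketch of the same pattern using client-go's current Lease-based lock (the identity string is illustrative):

package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "provisioner-replica-1"},
	}

	// Blocks until the lease is lost; only the current leader runs the controller.
	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("became leader, starting provisioner controller")
				<-ctx.Done()
			},
			OnStoppedLeading: func() { log.Println("lost lease, stopping") },
		},
	})
}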
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-299185 -n addons-299185
helpers_test.go:261: (dbg) Run:  kubectl --context addons-299185 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-w226f ingress-nginx-admission-patch-8ql5j test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-299185 describe pod ingress-nginx-admission-create-w226f ingress-nginx-admission-patch-8ql5j test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-299185 describe pod ingress-nginx-admission-create-w226f ingress-nginx-admission-patch-8ql5j test-job-nginx-0: exit status 1 (77.795556ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-w226f" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-8ql5j" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-299185 describe pod ingress-nginx-admission-create-w226f ingress-nginx-admission-patch-8ql5j test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (199.70s)
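Note the post-mortem sequencing above: the harness first lists non-running pods with a field selector, then describes them, but by the time describe ran all three pods had evidently been deleted, hence the NotFound errors and exit status 1. The same "list whatever isn't Running" probe via client-go (kubeconfig discovery is simplified here):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Equivalent of: kubectl get po -A --field-selector=status.phase!=Running
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}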

TestStartStop/group/old-k8s-version/serial/SecondStart (385.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-398652 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0729 11:14:10.310293 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-398652 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 102 (6m21.515554093s)

-- stdout --
	* [old-k8s-version-398652] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19337-2904404/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-2904404/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-398652" primary control-plane node in "old-k8s-version-398652" cluster
	* Pulling base image v0.0.44-1721902582-19326 ...
	* Restarting existing docker container for "old-k8s-version-398652" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.19 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-398652 addons enable metrics-server
	
	* Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	
	

-- /stdout --
** stderr ** 
	I0729 11:13:59.620664 3116606 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:13:59.620895 3116606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:13:59.620922 3116606 out.go:304] Setting ErrFile to fd 2...
	I0729 11:13:59.620943 3116606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:13:59.621201 3116606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-2904404/.minikube/bin
	I0729 11:13:59.621585 3116606 out.go:298] Setting JSON to false
	I0729 11:13:59.622595 3116606 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":68190,"bootTime":1722183450,"procs":218,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0729 11:13:59.622684 3116606 start.go:139] virtualization:  
	I0729 11:13:59.626859 3116606 out.go:177] * [old-k8s-version-398652] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0729 11:13:59.629103 3116606 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 11:13:59.629168 3116606 notify.go:220] Checking for updates...
	I0729 11:13:59.634250 3116606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:13:59.636430 3116606 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-2904404/kubeconfig
	I0729 11:13:59.638534 3116606 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-2904404/.minikube
	I0729 11:13:59.640548 3116606 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0729 11:13:59.642639 3116606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:13:59.644975 3116606 config.go:182] Loaded profile config "old-k8s-version-398652": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0729 11:13:59.647561 3116606 out.go:177] * Kubernetes 1.30.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.30.3
	I0729 11:13:59.649522 3116606 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:13:59.677850 3116606 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0729 11:13:59.677961 3116606 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 11:13:59.795748 3116606 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:68 SystemTime:2024-07-29 11:13:59.785607419 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0729 11:13:59.795924 3116606 docker.go:307] overlay module found
	I0729 11:13:59.798504 3116606 out.go:177] * Using the docker driver based on existing profile
	I0729 11:13:59.800523 3116606 start.go:297] selected driver: docker
	I0729 11:13:59.800545 3116606 start.go:901] validating driver "docker" against &{Name:old-k8s-version-398652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-398652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:13:59.800667 3116606 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:13:59.801281 3116606 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 11:13:59.887680 3116606 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:68 SystemTime:2024-07-29 11:13:59.878355619 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0729 11:13:59.888088 3116606 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:13:59.888120 3116606 cni.go:84] Creating CNI manager for ""
	I0729 11:13:59.888128 3116606 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0729 11:13:59.888182 3116606 start.go:340] cluster config:
	{Name:old-k8s-version-398652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-398652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:13:59.891218 3116606 out.go:177] * Starting "old-k8s-version-398652" primary control-plane node in "old-k8s-version-398652" cluster
	I0729 11:13:59.893275 3116606 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0729 11:13:59.895046 3116606 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0729 11:13:59.897114 3116606 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0729 11:13:59.897175 3116606 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19337-2904404/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0729 11:13:59.897188 3116606 cache.go:56] Caching tarball of preloaded images
	I0729 11:13:59.897195 3116606 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0729 11:13:59.897290 3116606 preload.go:172] Found /home/jenkins/minikube-integration/19337-2904404/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 11:13:59.897301 3116606 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0729 11:13:59.897418 3116606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652/config.json ...
	W0729 11:13:59.921881 3116606 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0729 11:13:59.921906 3116606 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 11:13:59.921995 3116606 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0729 11:13:59.922013 3116606 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0729 11:13:59.922018 3116606 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0729 11:13:59.922028 3116606 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0729 11:13:59.922034 3116606 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0729 11:14:00.113757 3116606 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0729 11:14:00.113790 3116606 cache.go:194] Successfully downloaded all kic artifacts
	I0729 11:14:00.113845 3116606 start.go:360] acquireMachinesLock for old-k8s-version-398652: {Name:mk801454594f4e22f8a91474bf2c723192ef160a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:14:00.113937 3116606 start.go:364] duration metric: took 57.722µs to acquireMachinesLock for "old-k8s-version-398652"
	I0729 11:14:00.113961 3116606 start.go:96] Skipping create...Using existing machine configuration
	I0729 11:14:00.113967 3116606 fix.go:54] fixHost starting: 
	I0729 11:14:00.114419 3116606 cli_runner.go:164] Run: docker container inspect old-k8s-version-398652 --format={{.State.Status}}
	I0729 11:14:00.190673 3116606 fix.go:112] recreateIfNeeded on old-k8s-version-398652: state=Stopped err=<nil>
	W0729 11:14:00.190704 3116606 fix.go:138] unexpected machine state, will restart: <nil>
	I0729 11:14:00.194598 3116606 out.go:177] * Restarting existing docker container for "old-k8s-version-398652" ...
	I0729 11:14:00.196661 3116606 cli_runner.go:164] Run: docker start old-k8s-version-398652
	I0729 11:14:00.704726 3116606 cli_runner.go:164] Run: docker container inspect old-k8s-version-398652 --format={{.State.Status}}
	I0729 11:14:00.733388 3116606 kic.go:430] container "old-k8s-version-398652" state is running.
	I0729 11:14:00.733794 3116606 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-398652
	I0729 11:14:00.761789 3116606 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652/config.json ...
	I0729 11:14:00.762009 3116606 machine.go:94] provisionDockerMachine start ...
	I0729 11:14:00.762089 3116606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-398652
	I0729 11:14:00.792297 3116606 main.go:141] libmachine: Using SSH client type: native
	I0729 11:14:00.792567 3116606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 36764 <nil> <nil>}
	I0729 11:14:00.792580 3116606 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:14:00.793333 3116606 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I0729 11:14:03.959577 3116606 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-398652
	
	I0729 11:14:03.959653 3116606 ubuntu.go:169] provisioning hostname "old-k8s-version-398652"
	I0729 11:14:03.959746 3116606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-398652
	I0729 11:14:03.996020 3116606 main.go:141] libmachine: Using SSH client type: native
	I0729 11:14:03.996271 3116606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 36764 <nil> <nil>}
	I0729 11:14:03.996282 3116606 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-398652 && echo "old-k8s-version-398652" | sudo tee /etc/hostname
	I0729 11:14:04.178515 3116606 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-398652
	
	I0729 11:14:04.178704 3116606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-398652
	I0729 11:14:04.207128 3116606 main.go:141] libmachine: Using SSH client type: native
	I0729 11:14:04.207373 3116606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 36764 <nil> <nil>}
	I0729 11:14:04.207390 3116606 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-398652' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-398652/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-398652' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:14:04.357018 3116606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
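Everything in this provisioning phase runs over SSH to the container's published port (127.0.0.1:36764, user "docker"); the "handshake failed: EOF" at 11:14:00 followed by a clean result at 11:14:03 shows the client simply retrying until sshd inside the restarted container is up. A sketch of that dial-until-ready loop with golang.org/x/crypto/ssh; the key path, retry interval, and deadline are assumptions:

package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps dialing until sshd answers or the deadline passes.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, deadline time.Duration) (*ssh.Client, error) {
	var lastErr error
	for start := time.Now(); time.Since(start) < deadline; time.Sleep(time.Second) {
		c, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return c, nil
		}
		lastErr = err // e.g. "ssh: handshake failed: EOF" while sshd is still starting
	}
	return nil, lastErr
}

func main() {
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/old-k8s-version-398652/id_rsa"))
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container only
		Timeout:         5 * time.Second,
	}
	client, err := dialWithRetry("127.0.0.1:36764", cfg, time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.Output("hostname")
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("hostname: %s", out)
}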
	I0729 11:14:04.357047 3116606 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19337-2904404/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-2904404/.minikube}
	I0729 11:14:04.357077 3116606 ubuntu.go:177] setting up certificates
	I0729 11:14:04.357088 3116606 provision.go:84] configureAuth start
	I0729 11:14:04.357159 3116606 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-398652
	I0729 11:14:04.390777 3116606 provision.go:143] copyHostCerts
	I0729 11:14:04.390861 3116606 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-2904404/.minikube/ca.pem, removing ...
	I0729 11:14:04.390876 3116606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-2904404/.minikube/ca.pem
	I0729 11:14:04.390955 3116606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-2904404/.minikube/ca.pem (1078 bytes)
	I0729 11:14:04.391053 3116606 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-2904404/.minikube/cert.pem, removing ...
	I0729 11:14:04.391064 3116606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-2904404/.minikube/cert.pem
	I0729 11:14:04.391090 3116606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-2904404/.minikube/cert.pem (1123 bytes)
	I0729 11:14:04.391143 3116606 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-2904404/.minikube/key.pem, removing ...
	I0729 11:14:04.391152 3116606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-2904404/.minikube/key.pem
	I0729 11:14:04.391181 3116606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-2904404/.minikube/key.pem (1675 bytes)
	I0729 11:14:04.391232 3116606 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-2904404/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-2904404/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-2904404/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-398652 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-398652]
	I0729 11:14:04.854625 3116606 provision.go:177] copyRemoteCerts
	I0729 11:14:04.854708 3116606 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:14:04.854754 3116606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-398652
	I0729 11:14:04.877835 3116606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36764 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/old-k8s-version-398652/id_rsa Username:docker}
	I0729 11:14:04.976334 3116606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 11:14:05.015110 3116606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0729 11:14:05.061965 3116606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0729 11:14:05.095693 3116606 provision.go:87] duration metric: took 738.590476ms to configureAuth
	I0729 11:14:05.095724 3116606 ubuntu.go:193] setting minikube options for container-runtime
	I0729 11:14:05.095945 3116606 config.go:182] Loaded profile config "old-k8s-version-398652": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0729 11:14:05.095961 3116606 machine.go:97] duration metric: took 4.333934709s to provisionDockerMachine
	I0729 11:14:05.095971 3116606 start.go:293] postStartSetup for "old-k8s-version-398652" (driver="docker")
	I0729 11:14:05.095992 3116606 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:14:05.096057 3116606 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:14:05.096104 3116606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-398652
	I0729 11:14:05.121333 3116606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36764 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/old-k8s-version-398652/id_rsa Username:docker}
	I0729 11:14:05.226322 3116606 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:14:05.232460 3116606 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0729 11:14:05.232496 3116606 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0729 11:14:05.232508 3116606 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0729 11:14:05.232516 3116606 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0729 11:14:05.232527 3116606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-2904404/.minikube/addons for local assets ...
	I0729 11:14:05.232591 3116606 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-2904404/.minikube/files for local assets ...
	I0729 11:14:05.232682 3116606 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-2904404/.minikube/files/etc/ssl/certs/29097892.pem -> 29097892.pem in /etc/ssl/certs
	I0729 11:14:05.232797 3116606 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:14:05.246538 3116606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/files/etc/ssl/certs/29097892.pem --> /etc/ssl/certs/29097892.pem (1708 bytes)
	I0729 11:14:05.277909 3116606 start.go:296] duration metric: took 181.923437ms for postStartSetup
	I0729 11:14:05.277995 3116606 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:14:05.278052 3116606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-398652
	I0729 11:14:05.296185 3116606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36764 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/old-k8s-version-398652/id_rsa Username:docker}
	I0729 11:14:05.404719 3116606 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 11:14:05.412682 3116606 fix.go:56] duration metric: took 5.298705553s for fixHost
	I0729 11:14:05.412705 3116606 start.go:83] releasing machines lock for "old-k8s-version-398652", held for 5.298759098s
	I0729 11:14:05.412777 3116606 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-398652
	I0729 11:14:05.441350 3116606 ssh_runner.go:195] Run: cat /version.json
	I0729 11:14:05.441415 3116606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-398652
	I0729 11:14:05.441644 3116606 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:14:05.441710 3116606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-398652
	I0729 11:14:05.475958 3116606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36764 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/old-k8s-version-398652/id_rsa Username:docker}
	I0729 11:14:05.478107 3116606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36764 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/old-k8s-version-398652/id_rsa Username:docker}
	I0729 11:14:05.579300 3116606 ssh_runner.go:195] Run: systemctl --version
	I0729 11:14:05.743700 3116606 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0729 11:14:05.750439 3116606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0729 11:14:05.784030 3116606 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0729 11:14:05.784182 3116606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:14:05.797270 3116606 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
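The two find/exec passes above are the CNI cleanup for the kindnet-on-containerd combination: loopback configs get a "name" field injected and their cniVersion pinned to 1.0.0 (older loopback configs carry neither, which newer CNI plugins reject), while any bridge/podman configs would be renamed to *.mk_disabled so they cannot shadow kindnet; none existed here. A sketch of the loopback rewrite done in Go instead of sed; the config path and file mode are assumptions:

package main

import (
	"encoding/json"
	"log"
	"os"
)

func main() {
	const path = "/etc/cni/net.d/200-loopback.conf" // illustrative path

	raw, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	var conf map[string]any
	if err := json.Unmarshal(raw, &conf); err != nil {
		log.Fatal(err)
	}
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback" // recent CNI spec versions require a network name
	}
	conf["cniVersion"] = "1.0.0" // pin to a spec version the bundled plugins accept

	out, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(path, append(out, '\n'), 0644); err != nil {
		log.Fatal(err)
	}
}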
	I0729 11:14:05.797296 3116606 start.go:495] detecting cgroup driver to use...
	I0729 11:14:05.797331 3116606 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0729 11:14:05.797389 3116606 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0729 11:14:05.819950 3116606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 11:14:05.841228 3116606 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:14:05.841297 3116606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:14:05.864178 3116606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:14:05.879914 3116606 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:14:06.045561 3116606 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:14:06.210861 3116606 docker.go:233] disabling docker service ...
	I0729 11:14:06.210933 3116606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:14:06.227437 3116606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:14:06.248211 3116606 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:14:06.416284 3116606 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:14:06.550198 3116606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:14:06.563959 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:14:06.581615 3116606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0729 11:14:06.593291 3116606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 11:14:06.609792 3116606 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 11:14:06.609865 3116606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 11:14:06.624570 3116606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 11:14:06.644023 3116606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 11:14:06.658903 3116606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 11:14:06.675111 3116606 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:14:06.693575 3116606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 11:14:06.708355 3116606 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:14:06.727171 3116606 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:14:06.737910 3116606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:14:06.893741 3116606 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0729 11:14:07.156192 3116606 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0729 11:14:07.156332 3116606 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0729 11:14:07.161820 3116606 start.go:563] Will wait 60s for crictl version
	I0729 11:14:07.161886 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:14:07.165692 3116606 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:14:07.219826 3116606 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.19
	RuntimeApiVersion:  v1
	I0729 11:14:07.219899 3116606 ssh_runner.go:195] Run: containerd --version
	I0729 11:14:07.252394 3116606 ssh_runner.go:195] Run: containerd --version
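The sed edits above rewrite /etc/containerd/config.toml in place (pause image 3.2 for v1.20, SystemdCgroup=false to match the detected cgroupfs driver, the runc v2 runtime, conf_dir=/etc/cni/net.d) before the daemon-reload and restart; the start code then polls the CRI socket for up to 60s and cross-checks the crictl and containerd versions before declaring the runtime ready. A minimal sketch of that socket wait; the poll and dial intervals are assumptions:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSocket dials a unix socket until it accepts a connection or the
// deadline passes, mirroring the "Will wait 60s for socket path" step above.
func waitForSocket(path string, deadline time.Duration) error {
	start := time.Now()
	for {
		conn, err := net.DialTimeout("unix", path, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("socket %s not ready after %v: %w", path, deadline, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("containerd socket is ready")
}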
	I0729 11:14:07.289301 3116606 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.19 ...
	I0729 11:14:07.291361 3116606 cli_runner.go:164] Run: docker network inspect old-k8s-version-398652 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 11:14:07.321008 3116606 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0729 11:14:07.325055 3116606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:14:07.340369 3116606 kubeadm.go:883] updating cluster {Name:old-k8s-version-398652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-398652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:14:07.340505 3116606 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0729 11:14:07.340563 3116606 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:14:07.406595 3116606 containerd.go:627] all images are preloaded for containerd runtime.
	I0729 11:14:07.406616 3116606 containerd.go:534] Images already preloaded, skipping extraction
	I0729 11:14:07.406676 3116606 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:14:07.480670 3116606 containerd.go:627] all images are preloaded for containerd runtime.
	I0729 11:14:07.480697 3116606 cache_images.go:84] Images are preloaded, skipping loading
	I0729 11:14:07.480706 3116606 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0729 11:14:07.480862 3116606 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-398652 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-398652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:14:07.480934 3116606 ssh_runner.go:195] Run: sudo crictl info
	I0729 11:14:07.562029 3116606 cni.go:84] Creating CNI manager for ""
	I0729 11:14:07.562100 3116606 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0729 11:14:07.562123 3116606 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:14:07.562182 3116606 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-398652 NodeName:old-k8s-version-398652 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0729 11:14:07.562368 3116606 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-398652"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
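The YAML above is the fully rendered kubeadm config for this profile: kubeadm.k8s.io/v1beta2 to match v1.20.0, eviction thresholds and image GC effectively disabled for CI, and the conntrack sysctls zeroed so kube-proxy skips settings the container cannot change. A toy version of that render step with text/template; the struct and template here are illustrative, not minikube's actual bootstrapper code:

package main

import (
	"os"
	"text/template"
)

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type kubeadmParams struct {
	ControlPlaneAddress string
	APIServerPort       int
	KubernetesVersion   string
	PodSubnet           string
	ServiceCIDR         string
}

func main() {
	p := kubeadmParams{
		ControlPlaneAddress: "control-plane.minikube.internal",
		APIServerPort:       8443,
		KubernetesVersion:   "v1.20.0",
		PodSubnet:           "10.244.0.0/16",
		ServiceCIDR:         "10.96.0.0/12",
	}
	// Render to stdout; the real flow ships the result to /var/tmp/minikube/kubeadm.yaml.new,
	// as the scp step below shows.
	if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}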
	
	I0729 11:14:07.562485 3116606 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0729 11:14:07.573093 3116606 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:14:07.573158 3116606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:14:07.586319 3116606 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0729 11:14:07.613327 3116606 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:14:07.637994 3116606 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0729 11:14:07.659588 3116606 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0729 11:14:07.663500 3116606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:14:07.674640 3116606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:14:07.808650 3116606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:14:07.826322 3116606 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652 for IP: 192.168.76.2
	I0729 11:14:07.826394 3116606 certs.go:194] generating shared ca certs ...
	I0729 11:14:07.826424 3116606 certs.go:226] acquiring lock for ca certs: {Name:mk2f7a1a044772cb2825bd46674f373ef156f656 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:14:07.826610 3116606 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-2904404/.minikube/ca.key
	I0729 11:14:07.826705 3116606 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-2904404/.minikube/proxy-client-ca.key
	I0729 11:14:07.826733 3116606 certs.go:256] generating profile certs ...
	I0729 11:14:07.826873 3116606 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652/client.key
	I0729 11:14:07.826983 3116606 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652/apiserver.key.8e4f850b
	I0729 11:14:07.827059 3116606 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652/proxy-client.key
	I0729 11:14:07.827215 3116606 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/2909789.pem (1338 bytes)
	W0729 11:14:07.827288 3116606 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/2909789_empty.pem, impossibly tiny 0 bytes
	I0729 11:14:07.827328 3116606 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:14:07.827378 3116606 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/ca.pem (1078 bytes)
	I0729 11:14:07.827436 3116606 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:14:07.827493 3116606 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/key.pem (1675 bytes)
	I0729 11:14:07.827576 3116606 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-2904404/.minikube/files/etc/ssl/certs/29097892.pem (1708 bytes)
	I0729 11:14:07.828505 3116606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:14:07.865933 3116606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 11:14:07.905355 3116606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:14:07.947114 3116606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:14:07.976943 3116606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0729 11:14:08.006875 3116606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 11:14:08.039275 3116606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:14:08.073478 3116606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 11:14:08.111864 3116606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:14:08.169965 3116606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/2909789.pem --> /usr/share/ca-certificates/2909789.pem (1338 bytes)
	I0729 11:14:08.240538 3116606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/files/etc/ssl/certs/29097892.pem --> /usr/share/ca-certificates/29097892.pem (1708 bytes)
	I0729 11:14:08.275410 3116606 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:14:08.295136 3116606 ssh_runner.go:195] Run: openssl version
	I0729 11:14:08.301568 3116606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2909789.pem && ln -fs /usr/share/ca-certificates/2909789.pem /etc/ssl/certs/2909789.pem"
	I0729 11:14:08.312236 3116606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2909789.pem
	I0729 11:14:08.316319 3116606 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:33 /usr/share/ca-certificates/2909789.pem
	I0729 11:14:08.316385 3116606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2909789.pem
	I0729 11:14:08.323685 3116606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2909789.pem /etc/ssl/certs/51391683.0"
	I0729 11:14:08.335604 3116606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/29097892.pem && ln -fs /usr/share/ca-certificates/29097892.pem /etc/ssl/certs/29097892.pem"
	I0729 11:14:08.350073 3116606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29097892.pem
	I0729 11:14:08.354139 3116606 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:33 /usr/share/ca-certificates/29097892.pem
	I0729 11:14:08.354205 3116606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29097892.pem
	I0729 11:14:08.361920 3116606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/29097892.pem /etc/ssl/certs/3ec20f2e.0"
	I0729 11:14:08.372433 3116606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:14:08.382883 3116606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:14:08.386920 3116606 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:14:08.386989 3116606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:14:08.394611 3116606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
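The openssl/ln pairs above implement the standard OpenSSL CA directory convention: each trusted PEM must be reachable as /etc/ssl/certs/<subject-hash>.0 so that hash-based lookup finds it. A small Go sketch of the same two steps (paths and the b5213941 hash are the ones in this log; error handling trimmed):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkCA symlinks pemPath into /etc/ssl/certs under its OpenSSL
    // subject hash, matching the `openssl x509 -hash` + `ln -fs` pair above.
    func linkCA(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
    	os.Remove(link) // replicate ln -f: replace any stale link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	fmt.Println(linkCA("/usr/share/ca-certificates/minikubeCA.pem"))
    }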
	I0729 11:14:08.405491 3116606 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:14:08.409542 3116606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0729 11:14:08.416805 3116606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0729 11:14:08.424042 3116606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0729 11:14:08.431291 3116606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0729 11:14:08.439916 3116606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0729 11:14:08.448378 3116606 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
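Each `openssl x509 -checkend 86400` run above exits non-zero if the certificate expires within the next 24 hours, which is what triggers regeneration. The same check can be expressed in pure Go with crypto/x509, without shelling out (a sketch, not minikube's actual code path):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within d, the condition `openssl x509 -checkend` tests above.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	ok, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }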
	I0729 11:14:08.456154 3116606 kubeadm.go:392] StartCluster: {Name:old-k8s-version-398652 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-398652 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:14:08.456253 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0729 11:14:08.456314 3116606 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:14:08.498763 3116606 cri.go:89] found id: "d8094d57752deded43c4f1971f720e95945f0e8e8bd5e4a2575c116f7dc73449"
	I0729 11:14:08.498786 3116606 cri.go:89] found id: "e47e4b203143f4c04a2625539152adf493fbd66f0141c8fa35d67c0eb9dcd15e"
	I0729 11:14:08.498790 3116606 cri.go:89] found id: "c353bab52107db86c72f21b2699f5c44a9e22f17ce40f5d83659ce4f08e9b3d4"
	I0729 11:14:08.498794 3116606 cri.go:89] found id: "b2c3fad36616c573babfc67ee709885d5905cf5a54593886a6f579147c8ce133"
	I0729 11:14:08.498798 3116606 cri.go:89] found id: "587b9ef1a62073411270ee8720a4b580bb9466a8ed4aee8f1f4ef0f09e399e7c"
	I0729 11:14:08.498801 3116606 cri.go:89] found id: "7743ce5235b563b5fef6aed42a02b9652010558f0c0bca72fdd35f7237352e4e"
	I0729 11:14:08.498805 3116606 cri.go:89] found id: "8db7d55daf4e8f1f7c356410dce4fc8bfe4e73b58c73519316918d020f07a738"
	I0729 11:14:08.498808 3116606 cri.go:89] found id: "789c7fdc7b8aac104b10d2c1cca0c6ce267d3325a6305aaea9f9af92bab8c889"
	I0729 11:14:08.498810 3116606 cri.go:89] found id: ""
	I0729 11:14:08.498862 3116606 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0729 11:14:08.511640 3116606 cri.go:116] JSON = null
	W0729 11:14:08.511694 3116606 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
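The warning above comes from comparing two views of the runtime: crictl found 8 kube-system containers, but `runc --root /run/containerd/runc/k8s.io list -f json` printed `null` (no container state to report), so there is nothing to unpause and the step is skipped. A sketch of decoding that JSON and counting paused entries; the struct field names are an assumption based on runc's state output, not verified against this run:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // runcContainer holds the two fields we care about from `runc list -f json`.
    // Field names are assumed from runc's JSON state output.
    type runcContainer struct {
    	ID     string `json:"id"`
    	Status string `json:"status"`
    }

    func listPaused(root string) ([]runcContainer, error) {
    	out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
    	if err != nil {
    		return nil, err
    	}
    	var cs []runcContainer // a JSON literal `null` unmarshals into a nil slice
    	if err := json.Unmarshal(out, &cs); err != nil {
    		return nil, err
    	}
    	var paused []runcContainer
    	for _, c := range cs {
    		if c.Status == "paused" {
    			paused = append(paused, c)
    		}
    	}
    	return paused, nil
    }

    func main() {
    	paused, err := listPaused("/run/containerd/runc/k8s.io")
    	fmt.Println(len(paused), err) // 0 paused vs 8 from crictl => the warning above
    }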
	I0729 11:14:08.511757 3116606 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:14:08.521637 3116606 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0729 11:14:08.521660 3116606 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0729 11:14:08.521715 3116606 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0729 11:14:08.530499 3116606 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0729 11:14:08.530998 3116606 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-398652" does not appear in /home/jenkins/minikube-integration/19337-2904404/kubeconfig
	I0729 11:14:08.531115 3116606 kubeconfig.go:62] /home/jenkins/minikube-integration/19337-2904404/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-398652" cluster setting kubeconfig missing "old-k8s-version-398652" context setting]
	I0729 11:14:08.531394 3116606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-2904404/kubeconfig: {Name:mkeecad1fa513e831370425fbda0ceb7b2cb39f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:14:08.532704 3116606 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0729 11:14:08.542235 3116606 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0729 11:14:08.542271 3116606 kubeadm.go:597] duration metric: took 20.604067ms to restartPrimaryControlPlane
	I0729 11:14:08.542281 3116606 kubeadm.go:394] duration metric: took 86.136743ms to StartCluster
	I0729 11:14:08.542297 3116606 settings.go:142] acquiring lock: {Name:mk13aac0349b1bb0c6badbadf5082ad34f96b8fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:14:08.542363 3116606 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19337-2904404/kubeconfig
	I0729 11:14:08.543050 3116606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-2904404/kubeconfig: {Name:mkeecad1fa513e831370425fbda0ceb7b2cb39f1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:14:08.543251 3116606 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0729 11:14:08.543535 3116606 config.go:182] Loaded profile config "old-k8s-version-398652": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0729 11:14:08.543576 3116606 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0729 11:14:08.543672 3116606 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-398652"
	I0729 11:14:08.543705 3116606 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-398652"
	W0729 11:14:08.543716 3116606 addons.go:243] addon storage-provisioner should already be in state true
	I0729 11:14:08.543737 3116606 host.go:66] Checking if "old-k8s-version-398652" exists ...
	I0729 11:14:08.544314 3116606 addons.go:69] Setting dashboard=true in profile "old-k8s-version-398652"
	I0729 11:14:08.544353 3116606 addons.go:234] Setting addon dashboard=true in "old-k8s-version-398652"
	W0729 11:14:08.544368 3116606 addons.go:243] addon dashboard should already be in state true
	I0729 11:14:08.544393 3116606 host.go:66] Checking if "old-k8s-version-398652" exists ...
	I0729 11:14:08.544808 3116606 cli_runner.go:164] Run: docker container inspect old-k8s-version-398652 --format={{.State.Status}}
	I0729 11:14:08.545313 3116606 cli_runner.go:164] Run: docker container inspect old-k8s-version-398652 --format={{.State.Status}}
	I0729 11:14:08.545706 3116606 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-398652"
	I0729 11:14:08.545741 3116606 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-398652"
	I0729 11:14:08.545820 3116606 out.go:177] * Verifying Kubernetes components...
	I0729 11:14:08.546015 3116606 cli_runner.go:164] Run: docker container inspect old-k8s-version-398652 --format={{.State.Status}}
	I0729 11:14:08.547754 3116606 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-398652"
	I0729 11:14:08.548104 3116606 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-398652"
	W0729 11:14:08.548179 3116606 addons.go:243] addon metrics-server should already be in state true
	I0729 11:14:08.548351 3116606 host.go:66] Checking if "old-k8s-version-398652" exists ...
	I0729 11:14:08.548302 3116606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:14:08.551107 3116606 cli_runner.go:164] Run: docker container inspect old-k8s-version-398652 --format={{.State.Status}}
	I0729 11:14:08.617133 3116606 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-398652"
	W0729 11:14:08.617165 3116606 addons.go:243] addon default-storageclass should already be in state true
	I0729 11:14:08.617192 3116606 host.go:66] Checking if "old-k8s-version-398652" exists ...
	I0729 11:14:08.617806 3116606 cli_runner.go:164] Run: docker container inspect old-k8s-version-398652 --format={{.State.Status}}
	I0729 11:14:08.624018 3116606 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0729 11:14:08.628026 3116606 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:14:08.628048 3116606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0729 11:14:08.628111 3116606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-398652
	I0729 11:14:08.634749 3116606 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0729 11:14:08.634792 3116606 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0729 11:14:08.637883 3116606 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0729 11:14:08.637910 3116606 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0729 11:14:08.637980 3116606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-398652
	I0729 11:14:08.647846 3116606 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0729 11:14:08.650064 3116606 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0729 11:14:08.650097 3116606 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0729 11:14:08.650173 3116606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-398652
	I0729 11:14:08.671621 3116606 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0729 11:14:08.671644 3116606 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0729 11:14:08.671712 3116606 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-398652
	I0729 11:14:08.697901 3116606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36764 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/old-k8s-version-398652/id_rsa Username:docker}
	I0729 11:14:08.719916 3116606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36764 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/old-k8s-version-398652/id_rsa Username:docker}
	I0729 11:14:08.722559 3116606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36764 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/old-k8s-version-398652/id_rsa Username:docker}
	I0729 11:14:08.737609 3116606 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36764 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/old-k8s-version-398652/id_rsa Username:docker}
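The inspect/sshutil pairs above show how a kic node is reached: docker publishes the container's port 22 on an ephemeral host port (36764 in this run), the Go template `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` extracts it, and the SSH client then dials 127.0.0.1 on that port with the machine's key. A sketch of the port lookup:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // sshHostPort returns the host port docker mapped to the container's
    // port 22, using the same inspect template as the log lines above.
    func sshHostPort(container string) (string, error) {
    	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshHostPort("old-k8s-version-398652")
    	fmt.Printf("ssh docker@127.0.0.1 -p %s (err=%v)\n", port, err)
    }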
	I0729 11:14:08.757856 3116606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:14:08.790131 3116606 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-398652" to be "Ready" ...
	I0729 11:14:08.901486 3116606 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0729 11:14:08.901519 3116606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0729 11:14:08.928529 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:14:08.938581 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0729 11:14:08.981806 3116606 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0729 11:14:08.981879 3116606 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0729 11:14:08.988532 3116606 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0729 11:14:08.988612 3116606 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0729 11:14:09.065568 3116606 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:14:09.065646 3116606 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0729 11:14:09.075109 3116606 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0729 11:14:09.075192 3116606 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0729 11:14:09.180776 3116606 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0729 11:14:09.180853 3116606 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0729 11:14:09.210310 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:14:09.257324 3116606 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0729 11:14:09.257397 3116606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0729 11:14:09.338289 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:09.338318 3116606 retry.go:31] will retry after 256.188695ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0729 11:14:09.338357 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:09.338363 3116606 retry.go:31] will retry after 296.796096ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
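From here the log is dominated by retry.go cycles: each kubectl apply against the not-yet-listening apiserver fails with "connection refused", and the helper reschedules it after a randomized, growing delay (256ms, 296ms, ... up to several seconds) until the apiserver comes back. A minimal sketch of that pattern; the growth rule and jitter here are illustrative, not minikube's exact constants:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry runs fn until it succeeds or attempts are exhausted, sleeping
    // a jittered, growing delay between tries, like the retry.go lines above.
    func retry(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		// Grow the delay each round and add jitter so retries spread out.
    		d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	i := 0
    	err := retry(5, 300*time.Millisecond, func() error {
    		i++
    		if i < 4 {
    			return fmt.Errorf("connection to the server localhost:8443 was refused")
    		}
    		return nil
    	})
    	fmt.Println("done:", err)
    }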
	I0729 11:14:09.383084 3116606 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0729 11:14:09.383109 3116606 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0729 11:14:09.418215 3116606 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0729 11:14:09.418240 3116606 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0729 11:14:09.421535 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:09.421561 3116606 retry.go:31] will retry after 370.307048ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:09.446269 3116606 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0729 11:14:09.446294 3116606 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0729 11:14:09.478252 3116606 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0729 11:14:09.478345 3116606 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0729 11:14:09.513879 3116606 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0729 11:14:09.513952 3116606 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0729 11:14:09.539820 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0729 11:14:09.594958 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:14:09.635629 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0729 11:14:09.673253 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:09.673335 3116606 retry.go:31] will retry after 193.655275ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:09.792209 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0729 11:14:09.851795 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:09.851888 3116606 retry.go:31] will retry after 205.589487ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:09.868205 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0729 11:14:09.909946 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:09.910025 3116606 retry.go:31] will retry after 467.614532ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0729 11:14:10.032695 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:10.032798 3116606 retry.go:31] will retry after 190.721042ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:10.057915 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0729 11:14:10.065922 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:10.066006 3116606 retry.go:31] will retry after 278.367295ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0729 11:14:10.181746 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:10.181829 3116606 retry.go:31] will retry after 506.614125ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:10.223974 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0729 11:14:10.341076 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:10.341157 3116606 retry.go:31] will retry after 489.904004ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:10.345485 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0729 11:14:10.377919 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0729 11:14:10.502171 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:10.502249 3116606 retry.go:31] will retry after 347.433975ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0729 11:14:10.549231 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:10.549325 3116606 retry.go:31] will retry after 658.66977ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:10.689652 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:14:10.791443 3116606 node_ready.go:53] error getting node "old-k8s-version-398652": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-398652": dial tcp 192.168.76.2:8443: connect: connection refused
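The node_ready.go error here is a plain TCP-level failure: nothing is listening on 192.168.76.2:8443 yet, so both the node poll and the addon applies bounce off the same closed port. Waiting for the port to open can be expressed as a simple dial loop (a sketch; minikube itself polls the node object through the API rather than raw TCP):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForAPIServer dials addr until a TCP connection succeeds or the
    // deadline passes, the low-level condition behind the errors above.
    func waitForAPIServer(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("apiserver %s not reachable within %v", addr, timeout)
    }

    func main() {
    	fmt.Println(waitForAPIServer("192.168.76.2:8443", 30*time.Second))
    }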
	W0729 11:14:10.801078 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:10.801157 3116606 retry.go:31] will retry after 1.095835156s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:10.831294 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:14:10.850790 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0729 11:14:10.944601 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:10.944632 3116606 retry.go:31] will retry after 581.141354ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0729 11:14:11.003195 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:11.003245 3116606 retry.go:31] will retry after 443.91807ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:11.208539 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0729 11:14:11.280608 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:11.280639 3116606 retry.go:31] will retry after 903.595613ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:11.447635 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0729 11:14:11.516814 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:11.516869 3116606 retry.go:31] will retry after 916.855227ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:11.526000 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0729 11:14:11.611588 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:11.611620 3116606 retry.go:31] will retry after 839.113432ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:11.897686 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0729 11:14:11.975385 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:11.975425 3116606 retry.go:31] will retry after 1.079463876s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:12.184759 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0729 11:14:12.273146 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:12.273178 3116606 retry.go:31] will retry after 967.964047ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:12.433981 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0729 11:14:12.451361 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0729 11:14:12.548988 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:12.549024 3116606 retry.go:31] will retry after 2.457780904s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0729 11:14:12.566159 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:12.566194 3116606 retry.go:31] will retry after 1.006292482s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:13.055146 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0729 11:14:13.140332 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:13.140364 3116606 retry.go:31] will retry after 1.044729047s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:13.241556 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0729 11:14:13.291179 3116606 node_ready.go:53] error getting node "old-k8s-version-398652": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-398652": dial tcp 192.168.76.2:8443: connect: connection refused
	W0729 11:14:13.313297 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:13.313328 3116606 retry.go:31] will retry after 1.345346207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:13.572680 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0729 11:14:13.704111 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:13.704187 3116606 retry.go:31] will retry after 1.611976097s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:14.185276 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0729 11:14:14.284963 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:14.284991 3116606 retry.go:31] will retry after 1.674586185s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:14.659184 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0729 11:14:14.789322 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:14.789352 3116606 retry.go:31] will retry after 3.259488579s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:15.007390 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0729 11:14:15.166217 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:15.166262 3116606 retry.go:31] will retry after 3.982764511s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:15.316429 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0729 11:14:15.453954 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:15.453984 3116606 retry.go:31] will retry after 5.341962675s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:15.791581 3116606 node_ready.go:53] error getting node "old-k8s-version-398652": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-398652": dial tcp 192.168.76.2:8443: connect: connection refused
	I0729 11:14:15.960144 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0729 11:14:16.124711 3116606 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:16.124741 3116606 retry.go:31] will retry after 3.971498971s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0729 11:14:18.049541 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0729 11:14:19.149837 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0729 11:14:20.096448 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0729 11:14:20.796866 3116606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0729 11:14:25.767071 3116606 node_ready.go:49] node "old-k8s-version-398652" has status "Ready":"True"
	I0729 11:14:25.767101 3116606 node_ready.go:38] duration metric: took 16.976937286s for node "old-k8s-version-398652" to be "Ready" ...
	I0729 11:14:25.767113 3116606 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:14:25.917968 3116606 pod_ready.go:78] waiting up to 6m0s for pod "coredns-74ff55c5b-tx9sc" in "kube-system" namespace to be "Ready" ...
	I0729 11:14:26.027581 3116606 pod_ready.go:92] pod "coredns-74ff55c5b-tx9sc" in "kube-system" namespace has status "Ready":"True"
	I0729 11:14:26.027657 3116606 pod_ready.go:81] duration metric: took 109.607933ms for pod "coredns-74ff55c5b-tx9sc" in "kube-system" namespace to be "Ready" ...
	I0729 11:14:26.027683 3116606 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-398652" in "kube-system" namespace to be "Ready" ...
	I0729 11:14:26.048273 3116606 pod_ready.go:92] pod "etcd-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"True"
	I0729 11:14:26.048359 3116606 pod_ready.go:81] duration metric: took 20.656186ms for pod "etcd-old-k8s-version-398652" in "kube-system" namespace to be "Ready" ...
	I0729 11:14:26.048391 3116606 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-398652" in "kube-system" namespace to be "Ready" ...
	I0729 11:14:26.794231 3116606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (8.74464107s)
	I0729 11:14:27.031298 3116606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.881410763s)
	I0729 11:14:27.031572 3116606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.935092099s)
	I0729 11:14:27.031700 3116606 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.234805581s)
	I0729 11:14:27.031754 3116606 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-398652"
	I0729 11:14:27.052187 3116606 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-398652 addons enable metrics-server
	
	I0729 11:14:27.068815 3116606 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0729 11:14:27.070965 3116606 addons.go:510] duration metric: took 18.527378583s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0729 11:14:28.058005 3116606 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:14:30.074545 3116606 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:14:32.556567 3116606 pod_ready.go:102] pod "kube-apiserver-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:14:34.554530 3116606 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"True"
	I0729 11:14:34.554559 3116606 pod_ready.go:81] duration metric: took 8.506134683s for pod "kube-apiserver-old-k8s-version-398652" in "kube-system" namespace to be "Ready" ...
	I0729 11:14:34.554571 3116606 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace to be "Ready" ...
	I0729 11:14:36.561705 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:14:38.565767 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:14:41.060963 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:14:43.561714 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:14:45.563158 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:14:48.062564 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:14:50.078592 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:14:52.563949 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:14:54.569365 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:14:57.069425 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:14:59.561462 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:15:01.563404 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:15:04.061672 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:15:06.561429 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:15:08.561635 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:15:11.063447 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:15:13.563167 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:15:16.060740 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:15:18.063995 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:15:20.562870 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:15:23.060687 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:15:25.062517 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:15:27.562196 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:15:29.563070 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:15:32.061881 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:15:34.561013 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:15:36.561115 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:15:38.567306 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:15:41.065457 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:15:43.562218 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:15:45.562258 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:15:47.568485 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:15:50.060895 3116606 pod_ready.go:102] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:15:52.060941 3116606 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"True"
	I0729 11:15:52.060970 3116606 pod_ready.go:81] duration metric: took 1m17.506390971s for pod "kube-controller-manager-old-k8s-version-398652" in "kube-system" namespace to be "Ready" ...
	I0729 11:15:52.060983 3116606 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jzn6w" in "kube-system" namespace to be "Ready" ...
	I0729 11:15:52.065795 3116606 pod_ready.go:92] pod "kube-proxy-jzn6w" in "kube-system" namespace has status "Ready":"True"
	I0729 11:15:52.065833 3116606 pod_ready.go:81] duration metric: took 4.831208ms for pod "kube-proxy-jzn6w" in "kube-system" namespace to be "Ready" ...
	I0729 11:15:52.065847 3116606 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-398652" in "kube-system" namespace to be "Ready" ...
	I0729 11:15:54.072592 3116606 pod_ready.go:102] pod "kube-scheduler-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"False"
	I0729 11:15:56.072292 3116606 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-398652" in "kube-system" namespace has status "Ready":"True"
	I0729 11:15:56.072361 3116606 pod_ready.go:81] duration metric: took 4.006500964s for pod "kube-scheduler-old-k8s-version-398652" in "kube-system" namespace to be "Ready" ...
	I0729 11:15:56.072388 3116606 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace to be "Ready" ...
	I0729 11:15:58.079246 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:16:00.116137 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:16:02.578666 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:16:05.079377 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:16:07.579059 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:16:10.079470 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:16:12.578761 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:16:14.579911 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:16:17.078971 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:16:19.578607 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:16:21.578959 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:16:23.579185 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:16:26.078681 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:16:28.578807 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:16:31.078501 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:16:33.079388 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:16:35.578606 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:16:38.078985 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:16:40.578519 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:16:42.578698 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:16:45.082128 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:16:47.578663 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:16:50.079071 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:16:52.578955 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:16:55.078701 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:16:57.078937 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:16:59.579629 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:17:02.079265 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:17:04.080073 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:17:06.080556 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:17:08.080764 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:17:10.581593 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:17:13.079390 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:17:15.579226 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:17:18.080494 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:17:20.578665 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:17:22.579557 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:17:25.079093 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:17:27.079338 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:17:29.578032 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:17:31.578840 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:17:34.078811 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:17:36.079229 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:17:38.079838 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:17:40.578327 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:17:42.578467 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:17:44.578797 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:17:46.579231 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:17:49.078620 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:17:51.079191 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:17:53.577714 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:17:55.583114 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:17:58.079252 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:18:00.103400 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:18:02.579252 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:18:04.579966 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:18:07.078200 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:18:09.080217 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:18:11.579217 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:18:14.078765 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:18:16.078984 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:18:18.079121 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:18:20.579399 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:18:23.078356 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:18:25.079728 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:18:27.080407 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:18:29.579038 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:18:32.079124 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:18:34.585137 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:18:37.078654 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:18:39.078988 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:18:41.578300 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:18:43.579381 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:18:46.078663 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:18:48.080945 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:18:50.579021 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:18:53.078976 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:18:55.079324 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:18:57.578349 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:18:59.579067 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:19:02.078176 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:19:04.079491 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:19:06.580120 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:19:09.078513 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:19:11.079138 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:19:13.578104 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:19:15.578987 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:19:18.079191 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:19:20.080606 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:19:22.578701 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:19:24.579122 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:19:27.090123 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:19:29.579210 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:19:31.581357 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:19:34.080745 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:19:36.579612 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:19:39.079652 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:19:41.578861 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:19:43.579654 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:19:46.080848 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:19:48.681862 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:19:51.089008 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:19:53.585236 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:19:56.079650 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:19:56.079681 3116606 pod_ready.go:81] duration metric: took 4m0.007278229s for pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace to be "Ready" ...
	E0729 11:19:56.079691 3116606 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0729 11:19:56.079699 3116606 pod_ready.go:38] duration metric: took 5m30.312575574s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:19:56.079715 3116606 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:19:56.079751 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:19:56.079855 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:19:56.130557 3116606 cri.go:89] found id: "55eabc6b310d11652dacd8619d5c8576e4a8dd6e56b763e6f5f40bd868a7aded"
	I0729 11:19:56.130584 3116606 cri.go:89] found id: "8db7d55daf4e8f1f7c356410dce4fc8bfe4e73b58c73519316918d020f07a738"
	I0729 11:19:56.130596 3116606 cri.go:89] found id: ""
	I0729 11:19:56.130603 3116606 logs.go:276] 2 containers: [55eabc6b310d11652dacd8619d5c8576e4a8dd6e56b763e6f5f40bd868a7aded 8db7d55daf4e8f1f7c356410dce4fc8bfe4e73b58c73519316918d020f07a738]
	I0729 11:19:56.130680 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.136126 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.140182 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0729 11:19:56.140313 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:19:56.185651 3116606 cri.go:89] found id: "d855c664b20f282851a23aa13af697ef4f539406374e1c860c26597b84f8ee75"
	I0729 11:19:56.185675 3116606 cri.go:89] found id: "587b9ef1a62073411270ee8720a4b580bb9466a8ed4aee8f1f4ef0f09e399e7c"
	I0729 11:19:56.185680 3116606 cri.go:89] found id: ""
	I0729 11:19:56.185686 3116606 logs.go:276] 2 containers: [d855c664b20f282851a23aa13af697ef4f539406374e1c860c26597b84f8ee75 587b9ef1a62073411270ee8720a4b580bb9466a8ed4aee8f1f4ef0f09e399e7c]
	I0729 11:19:56.185749 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.190850 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.196125 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0729 11:19:56.196194 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:19:56.237499 3116606 cri.go:89] found id: "ac883e66c537e35bc5030b86851432ea59b4a9c103d84e4ca5b61faffade7098"
	I0729 11:19:56.237523 3116606 cri.go:89] found id: "d8094d57752deded43c4f1971f720e95945f0e8e8bd5e4a2575c116f7dc73449"
	I0729 11:19:56.237528 3116606 cri.go:89] found id: ""
	I0729 11:19:56.237536 3116606 logs.go:276] 2 containers: [ac883e66c537e35bc5030b86851432ea59b4a9c103d84e4ca5b61faffade7098 d8094d57752deded43c4f1971f720e95945f0e8e8bd5e4a2575c116f7dc73449]
	I0729 11:19:56.237605 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.241499 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.245312 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:19:56.245388 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:19:56.295106 3116606 cri.go:89] found id: "92e67f37a7b9d727171d0240a5fde8b95850b192051b0f809bbe087f8c7de33a"
	I0729 11:19:56.295132 3116606 cri.go:89] found id: "7743ce5235b563b5fef6aed42a02b9652010558f0c0bca72fdd35f7237352e4e"
	I0729 11:19:56.295136 3116606 cri.go:89] found id: ""
	I0729 11:19:56.295143 3116606 logs.go:276] 2 containers: [92e67f37a7b9d727171d0240a5fde8b95850b192051b0f809bbe087f8c7de33a 7743ce5235b563b5fef6aed42a02b9652010558f0c0bca72fdd35f7237352e4e]
	I0729 11:19:56.295210 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.300410 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.304077 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:19:56.304172 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:19:56.355092 3116606 cri.go:89] found id: "54ffb19a0292eb77b61a76c3728fb619af5c455bf9ff1241a21b0069be4e8747"
	I0729 11:19:56.355121 3116606 cri.go:89] found id: "b2c3fad36616c573babfc67ee709885d5905cf5a54593886a6f579147c8ce133"
	I0729 11:19:56.355163 3116606 cri.go:89] found id: ""
	I0729 11:19:56.355177 3116606 logs.go:276] 2 containers: [54ffb19a0292eb77b61a76c3728fb619af5c455bf9ff1241a21b0069be4e8747 b2c3fad36616c573babfc67ee709885d5905cf5a54593886a6f579147c8ce133]
	I0729 11:19:56.355257 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.358978 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.362686 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:19:56.362798 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:19:56.419406 3116606 cri.go:89] found id: "8ccafb224e43a5b6518db9936d1dc9fd44a73e2192879bb5bf0f3ce3b4d175cc"
	I0729 11:19:56.419430 3116606 cri.go:89] found id: "789c7fdc7b8aac104b10d2c1cca0c6ce267d3325a6305aaea9f9af92bab8c889"
	I0729 11:19:56.419436 3116606 cri.go:89] found id: ""
	I0729 11:19:56.419443 3116606 logs.go:276] 2 containers: [8ccafb224e43a5b6518db9936d1dc9fd44a73e2192879bb5bf0f3ce3b4d175cc 789c7fdc7b8aac104b10d2c1cca0c6ce267d3325a6305aaea9f9af92bab8c889]
	I0729 11:19:56.419502 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.423246 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.427274 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0729 11:19:56.427354 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:19:56.471669 3116606 cri.go:89] found id: "be4fb3954f9193d0577447927a1b728347ba8abdcfffe06990bb5d05b6c8f49c"
	I0729 11:19:56.471699 3116606 cri.go:89] found id: "e47e4b203143f4c04a2625539152adf493fbd66f0141c8fa35d67c0eb9dcd15e"
	I0729 11:19:56.471704 3116606 cri.go:89] found id: ""
	I0729 11:19:56.471711 3116606 logs.go:276] 2 containers: [be4fb3954f9193d0577447927a1b728347ba8abdcfffe06990bb5d05b6c8f49c e47e4b203143f4c04a2625539152adf493fbd66f0141c8fa35d67c0eb9dcd15e]
	I0729 11:19:56.471807 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.475384 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.478658 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0729 11:19:56.478747 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 11:19:56.535044 3116606 cri.go:89] found id: "63ccc5a016621ddee17a12e23e7873395935fcf7d04f3ffabff8ba671927254a"
	I0729 11:19:56.535069 3116606 cri.go:89] found id: "c353bab52107db86c72f21b2699f5c44a9e22f17ce40f5d83659ce4f08e9b3d4"
	I0729 11:19:56.535075 3116606 cri.go:89] found id: ""
	I0729 11:19:56.535082 3116606 logs.go:276] 2 containers: [63ccc5a016621ddee17a12e23e7873395935fcf7d04f3ffabff8ba671927254a c353bab52107db86c72f21b2699f5c44a9e22f17ce40f5d83659ce4f08e9b3d4]
	I0729 11:19:56.535172 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.539290 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.543338 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:19:56.543434 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:19:56.598377 3116606 cri.go:89] found id: "0afb69ae0e699da6d8df0dbfb7b284327d738087f9b4ba1a283917462e4ff191"
	I0729 11:19:56.598399 3116606 cri.go:89] found id: ""
	I0729 11:19:56.598407 3116606 logs.go:276] 1 containers: [0afb69ae0e699da6d8df0dbfb7b284327d738087f9b4ba1a283917462e4ff191]
	I0729 11:19:56.598496 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.603285 3116606 logs.go:123] Gathering logs for storage-provisioner [c353bab52107db86c72f21b2699f5c44a9e22f17ce40f5d83659ce4f08e9b3d4] ...
	I0729 11:19:56.603323 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c353bab52107db86c72f21b2699f5c44a9e22f17ce40f5d83659ce4f08e9b3d4"
	I0729 11:19:56.650539 3116606 logs.go:123] Gathering logs for dmesg ...
	I0729 11:19:56.650571 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:19:56.680567 3116606 logs.go:123] Gathering logs for kube-apiserver [8db7d55daf4e8f1f7c356410dce4fc8bfe4e73b58c73519316918d020f07a738] ...
	I0729 11:19:56.680598 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8db7d55daf4e8f1f7c356410dce4fc8bfe4e73b58c73519316918d020f07a738"
	I0729 11:19:56.763644 3116606 logs.go:123] Gathering logs for coredns [ac883e66c537e35bc5030b86851432ea59b4a9c103d84e4ca5b61faffade7098] ...
	I0729 11:19:56.763714 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac883e66c537e35bc5030b86851432ea59b4a9c103d84e4ca5b61faffade7098"
	I0729 11:19:56.829727 3116606 logs.go:123] Gathering logs for kube-scheduler [92e67f37a7b9d727171d0240a5fde8b95850b192051b0f809bbe087f8c7de33a] ...
	I0729 11:19:56.829795 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92e67f37a7b9d727171d0240a5fde8b95850b192051b0f809bbe087f8c7de33a"
	I0729 11:19:56.875673 3116606 logs.go:123] Gathering logs for kube-proxy [b2c3fad36616c573babfc67ee709885d5905cf5a54593886a6f579147c8ce133] ...
	I0729 11:19:56.875742 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2c3fad36616c573babfc67ee709885d5905cf5a54593886a6f579147c8ce133"
	I0729 11:19:56.938328 3116606 logs.go:123] Gathering logs for kube-apiserver [55eabc6b310d11652dacd8619d5c8576e4a8dd6e56b763e6f5f40bd868a7aded] ...
	I0729 11:19:56.938397 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55eabc6b310d11652dacd8619d5c8576e4a8dd6e56b763e6f5f40bd868a7aded"
	I0729 11:19:57.025730 3116606 logs.go:123] Gathering logs for coredns [d8094d57752deded43c4f1971f720e95945f0e8e8bd5e4a2575c116f7dc73449] ...
	I0729 11:19:57.025768 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8094d57752deded43c4f1971f720e95945f0e8e8bd5e4a2575c116f7dc73449"
	I0729 11:19:57.110787 3116606 logs.go:123] Gathering logs for storage-provisioner [63ccc5a016621ddee17a12e23e7873395935fcf7d04f3ffabff8ba671927254a] ...
	I0729 11:19:57.110817 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63ccc5a016621ddee17a12e23e7873395935fcf7d04f3ffabff8ba671927254a"
	I0729 11:19:57.201956 3116606 logs.go:123] Gathering logs for kubernetes-dashboard [0afb69ae0e699da6d8df0dbfb7b284327d738087f9b4ba1a283917462e4ff191] ...
	I0729 11:19:57.201985 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0afb69ae0e699da6d8df0dbfb7b284327d738087f9b4ba1a283917462e4ff191"
	I0729 11:19:57.272147 3116606 logs.go:123] Gathering logs for container status ...
	I0729 11:19:57.272177 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:19:57.350104 3116606 logs.go:123] Gathering logs for etcd [587b9ef1a62073411270ee8720a4b580bb9466a8ed4aee8f1f4ef0f09e399e7c] ...
	I0729 11:19:57.350142 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 587b9ef1a62073411270ee8720a4b580bb9466a8ed4aee8f1f4ef0f09e399e7c"
	I0729 11:19:57.424914 3116606 logs.go:123] Gathering logs for kube-proxy [54ffb19a0292eb77b61a76c3728fb619af5c455bf9ff1241a21b0069be4e8747] ...
	I0729 11:19:57.424950 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ffb19a0292eb77b61a76c3728fb619af5c455bf9ff1241a21b0069be4e8747"
	I0729 11:19:57.484351 3116606 logs.go:123] Gathering logs for kube-controller-manager [8ccafb224e43a5b6518db9936d1dc9fd44a73e2192879bb5bf0f3ce3b4d175cc] ...
	I0729 11:19:57.484380 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ccafb224e43a5b6518db9936d1dc9fd44a73e2192879bb5bf0f3ce3b4d175cc"
	I0729 11:19:57.555230 3116606 logs.go:123] Gathering logs for kube-controller-manager [789c7fdc7b8aac104b10d2c1cca0c6ce267d3325a6305aaea9f9af92bab8c889] ...
	I0729 11:19:57.555265 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 789c7fdc7b8aac104b10d2c1cca0c6ce267d3325a6305aaea9f9af92bab8c889"
	I0729 11:19:57.624667 3116606 logs.go:123] Gathering logs for kindnet [e47e4b203143f4c04a2625539152adf493fbd66f0141c8fa35d67c0eb9dcd15e] ...
	I0729 11:19:57.624706 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e47e4b203143f4c04a2625539152adf493fbd66f0141c8fa35d67c0eb9dcd15e"
	I0729 11:19:57.714018 3116606 logs.go:123] Gathering logs for containerd ...
	I0729 11:19:57.714052 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0729 11:19:57.794340 3116606 logs.go:123] Gathering logs for kubelet ...
	I0729 11:19:57.794379 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 11:19:57.923004 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597182     661 reflector.go:138] object-"kube-system"/"kube-proxy-token-7kgps": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-7kgps" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:19:57.923301 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597314     661 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:19:57.923554 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597368     661 reflector.go:138] object-"kube-system"/"metrics-server-token-jpdkd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-jpdkd" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:19:57.923815 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597412     661 reflector.go:138] object-"default"/"default-token-gc665": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-gc665" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:19:57.924038 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597478     661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:19:57.924292 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597531     661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-bnfpv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-bnfpv" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:19:57.924525 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597580     661 reflector.go:138] object-"kube-system"/"coredns-token-gpx2v": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-gpx2v" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:19:57.924759 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597484     661 reflector.go:138] object-"kube-system"/"kindnet-token-vw6mq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-vw6mq" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:19:57.933060 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:27 old-k8s-version-398652 kubelet[661]: E0729 11:14:27.458872     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0729 11:19:57.933498 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:27 old-k8s-version-398652 kubelet[661]: E0729 11:14:27.878876     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.937020 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:41 old-k8s-version-398652 kubelet[661]: E0729 11:14:41.668429     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0729 11:19:57.939154 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:51 old-k8s-version-398652 kubelet[661]: E0729 11:14:51.984489     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.939509 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:52 old-k8s-version-398652 kubelet[661]: E0729 11:14:52.978054     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.939717 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:53 old-k8s-version-398652 kubelet[661]: E0729 11:14:53.673014     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.940417 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:57 old-k8s-version-398652 kubelet[661]: E0729 11:14:57.307270     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.943070 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:05 old-k8s-version-398652 kubelet[661]: E0729 11:15:05.669370     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0729 11:19:57.944235 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:10 old-k8s-version-398652 kubelet[661]: E0729 11:15:10.052366     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.944650 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:17 old-k8s-version-398652 kubelet[661]: E0729 11:15:17.335946     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.944892 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:17 old-k8s-version-398652 kubelet[661]: E0729 11:15:17.660197     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.945617 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:31 old-k8s-version-398652 kubelet[661]: E0729 11:15:31.131386     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.945870 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:31 old-k8s-version-398652 kubelet[661]: E0729 11:15:31.660183     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.946246 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:37 old-k8s-version-398652 kubelet[661]: E0729 11:15:37.307168     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.946464 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:42 old-k8s-version-398652 kubelet[661]: E0729 11:15:42.660275     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.946834 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:51 old-k8s-version-398652 kubelet[661]: E0729 11:15:51.660440     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.952808 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:54 old-k8s-version-398652 kubelet[661]: E0729 11:15:54.670644     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0729 11:19:57.953193 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:05 old-k8s-version-398652 kubelet[661]: E0729 11:16:05.659590     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.953407 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:07 old-k8s-version-398652 kubelet[661]: E0729 11:16:07.660440     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.954022 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:19 old-k8s-version-398652 kubelet[661]: E0729 11:16:19.271655     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.954219 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:19 old-k8s-version-398652 kubelet[661]: E0729 11:16:19.669252     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.954573 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:27 old-k8s-version-398652 kubelet[661]: E0729 11:16:27.307974     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.954782 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:34 old-k8s-version-398652 kubelet[661]: E0729 11:16:34.659936     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.955135 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:41 old-k8s-version-398652 kubelet[661]: E0729 11:16:41.659566     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.955347 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:49 old-k8s-version-398652 kubelet[661]: E0729 11:16:49.659815     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.955699 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:54 old-k8s-version-398652 kubelet[661]: E0729 11:16:54.660512     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.955988 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:02 old-k8s-version-398652 kubelet[661]: E0729 11:17:02.664093     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.956354 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:07 old-k8s-version-398652 kubelet[661]: E0729 11:17:07.659718     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.956547 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:13 old-k8s-version-398652 kubelet[661]: E0729 11:17:13.659955     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.956894 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:22 old-k8s-version-398652 kubelet[661]: E0729 11:17:22.659654     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.959376 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:24 old-k8s-version-398652 kubelet[661]: E0729 11:17:24.668985     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0729 11:19:57.959730 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:34 old-k8s-version-398652 kubelet[661]: E0729 11:17:34.659569     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.959931 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:36 old-k8s-version-398652 kubelet[661]: E0729 11:17:36.661730     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.960131 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:48 old-k8s-version-398652 kubelet[661]: E0729 11:17:48.661201     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.960746 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:50 old-k8s-version-398652 kubelet[661]: E0729 11:17:50.522025     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.961100 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:57 old-k8s-version-398652 kubelet[661]: E0729 11:17:57.307619     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.961310 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:01 old-k8s-version-398652 kubelet[661]: E0729 11:18:01.660045     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.961663 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:08 old-k8s-version-398652 kubelet[661]: E0729 11:18:08.661963     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.961872 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:16 old-k8s-version-398652 kubelet[661]: E0729 11:18:16.660611     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.962224 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:21 old-k8s-version-398652 kubelet[661]: E0729 11:18:21.659655     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.962437 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:29 old-k8s-version-398652 kubelet[661]: E0729 11:18:29.660643     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.962843 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:32 old-k8s-version-398652 kubelet[661]: E0729 11:18:32.659948     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.963045 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:40 old-k8s-version-398652 kubelet[661]: E0729 11:18:40.660579     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.963437 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:44 old-k8s-version-398652 kubelet[661]: E0729 11:18:44.660231     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.963644 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:51 old-k8s-version-398652 kubelet[661]: E0729 11:18:51.660081     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.964008 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:55 old-k8s-version-398652 kubelet[661]: E0729 11:18:55.660086     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.964217 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:02 old-k8s-version-398652 kubelet[661]: E0729 11:19:02.660084     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.964568 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:06 old-k8s-version-398652 kubelet[661]: E0729 11:19:06.660508     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.964774 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:17 old-k8s-version-398652 kubelet[661]: E0729 11:19:17.660133     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.965129 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:19 old-k8s-version-398652 kubelet[661]: E0729 11:19:19.659640     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.965338 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:32 old-k8s-version-398652 kubelet[661]: E0729 11:19:32.660866     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.965689 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:34 old-k8s-version-398652 kubelet[661]: E0729 11:19:34.659747     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.965895 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:45 old-k8s-version-398652 kubelet[661]: E0729 11:19:45.659987     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.966246 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:47 old-k8s-version-398652 kubelet[661]: E0729 11:19:47.660198     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
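	[editor's note] The run of kubelet problems above reduces to two failure loops: dashboard-metrics-scraper restarting under CrashLoopBackOff, and metrics-server stuck in ErrImagePull/ImagePullBackOff because the deliberately unresolvable registry fake.domain cannot be looked up. A minimal sketch for reproducing the same scan by hand, assuming the profile old-k8s-version-398652 from this run is still up (it mirrors the `journalctl -u kubelet -n 400` call minikube itself issues below):

```bash
# Minimal sketch, assuming profile old-k8s-version-398652 is still running.
# Tail the kubelet unit inside the node and filter for the same failure
# signatures that logs.go flags as "Found kubelet problem".
minikube -p old-k8s-version-398652 ssh -- \
  "sudo journalctl -u kubelet -n 400" \
  | grep -E 'CrashLoopBackOff|ImagePullBackOff|ErrImagePull'
```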
	I0729 11:19:57.966259 3116606 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:19:57.966273 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 11:19:58.173565 3116606 logs.go:123] Gathering logs for etcd [d855c664b20f282851a23aa13af697ef4f539406374e1c860c26597b84f8ee75] ...
	I0729 11:19:58.173647 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d855c664b20f282851a23aa13af697ef4f539406374e1c860c26597b84f8ee75"
	I0729 11:19:58.225473 3116606 logs.go:123] Gathering logs for kube-scheduler [7743ce5235b563b5fef6aed42a02b9652010558f0c0bca72fdd35f7237352e4e] ...
	I0729 11:19:58.225504 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7743ce5235b563b5fef6aed42a02b9652010558f0c0bca72fdd35f7237352e4e"
	I0729 11:19:58.276472 3116606 logs.go:123] Gathering logs for kindnet [be4fb3954f9193d0577447927a1b728347ba8abdcfffe06990bb5d05b6c8f49c] ...
	I0729 11:19:58.276507 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be4fb3954f9193d0577447927a1b728347ba8abdcfffe06990bb5d05b6c8f49c"
	I0729 11:19:58.337556 3116606 out.go:304] Setting ErrFile to fd 2...
	I0729 11:19:58.337587 3116606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 11:19:58.337646 3116606 out.go:239] X Problems detected in kubelet:
	W0729 11:19:58.337659 3116606 out.go:239]   Jul 29 11:19:19 old-k8s-version-398652 kubelet[661]: E0729 11:19:19.659640     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:58.337677 3116606 out.go:239]   Jul 29 11:19:32 old-k8s-version-398652 kubelet[661]: E0729 11:19:32.660866     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:58.337688 3116606 out.go:239]   Jul 29 11:19:34 old-k8s-version-398652 kubelet[661]: E0729 11:19:34.659747     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:58.337695 3116606 out.go:239]   Jul 29 11:19:45 old-k8s-version-398652 kubelet[661]: E0729 11:19:45.659987     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:58.337701 3116606 out.go:239]   Jul 29 11:19:47 old-k8s-version-398652 kubelet[661]: E0729 11:19:47.660198     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	I0729 11:19:58.337708 3116606 out.go:304] Setting ErrFile to fd 2...
	I0729 11:19:58.337714 3116606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:20:08.338434 3116606 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:20:08.357291 3116606 api_server.go:72] duration metric: took 5m59.814002768s to wait for apiserver process to appear ...
	I0729 11:20:08.357315 3116606 api_server.go:88] waiting for apiserver healthz status ...
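	[editor's note] At this point the test is polling the apiserver's health endpoint before re-gathering component logs. A hand-run equivalent, assuming the kubeconfig context carries the profile name (minikube names the context after the profile):

```bash
# Minimal sketch, assuming context old-k8s-version-398652 exists in the
# kubeconfig. Prints "ok" once the apiserver's /healthz endpoint passes.
kubectl --context old-k8s-version-398652 get --raw='/healthz'
```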
	I0729 11:20:08.357357 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:20:08.357418 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:20:08.443602 3116606 cri.go:89] found id: "55eabc6b310d11652dacd8619d5c8576e4a8dd6e56b763e6f5f40bd868a7aded"
	I0729 11:20:08.443628 3116606 cri.go:89] found id: "8db7d55daf4e8f1f7c356410dce4fc8bfe4e73b58c73519316918d020f07a738"
	I0729 11:20:08.443633 3116606 cri.go:89] found id: ""
	I0729 11:20:08.443642 3116606 logs.go:276] 2 containers: [55eabc6b310d11652dacd8619d5c8576e4a8dd6e56b763e6f5f40bd868a7aded 8db7d55daf4e8f1f7c356410dce4fc8bfe4e73b58c73519316918d020f07a738]
	I0729 11:20:08.443713 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:08.447894 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:08.456460 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0729 11:20:08.456538 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:20:08.509962 3116606 cri.go:89] found id: "d855c664b20f282851a23aa13af697ef4f539406374e1c860c26597b84f8ee75"
	I0729 11:20:08.509988 3116606 cri.go:89] found id: "587b9ef1a62073411270ee8720a4b580bb9466a8ed4aee8f1f4ef0f09e399e7c"
	I0729 11:20:08.509993 3116606 cri.go:89] found id: ""
	I0729 11:20:08.510000 3116606 logs.go:276] 2 containers: [d855c664b20f282851a23aa13af697ef4f539406374e1c860c26597b84f8ee75 587b9ef1a62073411270ee8720a4b580bb9466a8ed4aee8f1f4ef0f09e399e7c]
	I0729 11:20:08.510092 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:08.514390 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:08.518891 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0729 11:20:08.518976 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:20:08.589652 3116606 cri.go:89] found id: "ac883e66c537e35bc5030b86851432ea59b4a9c103d84e4ca5b61faffade7098"
	I0729 11:20:08.589677 3116606 cri.go:89] found id: "d8094d57752deded43c4f1971f720e95945f0e8e8bd5e4a2575c116f7dc73449"
	I0729 11:20:08.589682 3116606 cri.go:89] found id: ""
	I0729 11:20:08.589689 3116606 logs.go:276] 2 containers: [ac883e66c537e35bc5030b86851432ea59b4a9c103d84e4ca5b61faffade7098 d8094d57752deded43c4f1971f720e95945f0e8e8bd5e4a2575c116f7dc73449]
	I0729 11:20:08.589748 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:08.594238 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:08.601491 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:20:08.601565 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:20:08.670209 3116606 cri.go:89] found id: "92e67f37a7b9d727171d0240a5fde8b95850b192051b0f809bbe087f8c7de33a"
	I0729 11:20:08.670234 3116606 cri.go:89] found id: "7743ce5235b563b5fef6aed42a02b9652010558f0c0bca72fdd35f7237352e4e"
	I0729 11:20:08.670247 3116606 cri.go:89] found id: ""
	I0729 11:20:08.670271 3116606 logs.go:276] 2 containers: [92e67f37a7b9d727171d0240a5fde8b95850b192051b0f809bbe087f8c7de33a 7743ce5235b563b5fef6aed42a02b9652010558f0c0bca72fdd35f7237352e4e]
	I0729 11:20:08.670369 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:08.676313 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:08.682047 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:20:08.682176 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:20:08.768290 3116606 cri.go:89] found id: "54ffb19a0292eb77b61a76c3728fb619af5c455bf9ff1241a21b0069be4e8747"
	I0729 11:20:08.768312 3116606 cri.go:89] found id: "b2c3fad36616c573babfc67ee709885d5905cf5a54593886a6f579147c8ce133"
	I0729 11:20:08.768317 3116606 cri.go:89] found id: ""
	I0729 11:20:08.768324 3116606 logs.go:276] 2 containers: [54ffb19a0292eb77b61a76c3728fb619af5c455bf9ff1241a21b0069be4e8747 b2c3fad36616c573babfc67ee709885d5905cf5a54593886a6f579147c8ce133]
	I0729 11:20:08.768422 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:08.780515 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:08.792160 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:20:08.792258 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:20:08.939024 3116606 cri.go:89] found id: "8ccafb224e43a5b6518db9936d1dc9fd44a73e2192879bb5bf0f3ce3b4d175cc"
	I0729 11:20:08.939045 3116606 cri.go:89] found id: "789c7fdc7b8aac104b10d2c1cca0c6ce267d3325a6305aaea9f9af92bab8c889"
	I0729 11:20:08.939050 3116606 cri.go:89] found id: ""
	I0729 11:20:08.939057 3116606 logs.go:276] 2 containers: [8ccafb224e43a5b6518db9936d1dc9fd44a73e2192879bb5bf0f3ce3b4d175cc 789c7fdc7b8aac104b10d2c1cca0c6ce267d3325a6305aaea9f9af92bab8c889]
	I0729 11:20:08.939163 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:08.943529 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:08.949463 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0729 11:20:08.949547 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:20:08.999394 3116606 cri.go:89] found id: "be4fb3954f9193d0577447927a1b728347ba8abdcfffe06990bb5d05b6c8f49c"
	I0729 11:20:08.999418 3116606 cri.go:89] found id: "e47e4b203143f4c04a2625539152adf493fbd66f0141c8fa35d67c0eb9dcd15e"
	I0729 11:20:08.999423 3116606 cri.go:89] found id: ""
	I0729 11:20:08.999431 3116606 logs.go:276] 2 containers: [be4fb3954f9193d0577447927a1b728347ba8abdcfffe06990bb5d05b6c8f49c e47e4b203143f4c04a2625539152adf493fbd66f0141c8fa35d67c0eb9dcd15e]
	I0729 11:20:08.999503 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:09.004747 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:09.009729 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:20:09.009812 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:20:09.070708 3116606 cri.go:89] found id: "0afb69ae0e699da6d8df0dbfb7b284327d738087f9b4ba1a283917462e4ff191"
	I0729 11:20:09.070731 3116606 cri.go:89] found id: ""
	I0729 11:20:09.070739 3116606 logs.go:276] 1 containers: [0afb69ae0e699da6d8df0dbfb7b284327d738087f9b4ba1a283917462e4ff191]
	I0729 11:20:09.070808 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:09.075996 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0729 11:20:09.076084 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 11:20:09.132498 3116606 cri.go:89] found id: "63ccc5a016621ddee17a12e23e7873395935fcf7d04f3ffabff8ba671927254a"
	I0729 11:20:09.132527 3116606 cri.go:89] found id: "c353bab52107db86c72f21b2699f5c44a9e22f17ce40f5d83659ce4f08e9b3d4"
	I0729 11:20:09.132532 3116606 cri.go:89] found id: ""
	I0729 11:20:09.132539 3116606 logs.go:276] 2 containers: [63ccc5a016621ddee17a12e23e7873395935fcf7d04f3ffabff8ba671927254a c353bab52107db86c72f21b2699f5c44a9e22f17ce40f5d83659ce4f08e9b3d4]
	I0729 11:20:09.132621 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:09.137022 3116606 ssh_runner.go:195] Run: which crictl
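	[editor's note] Each block above follows the same discovery pattern: `crictl ps -a --quiet --name=<component>` to resolve up to two container IDs per component, then `crictl logs --tail 400 <id>` on each hit. The same steps run by hand inside `minikube ssh`, using an ID resolved earlier in this run:

```bash
# Minimal sketch of the discovery loop, run inside the node (minikube ssh).
# The ID below is the kube-apiserver container found earlier in this log;
# re-run the ps step first, since IDs change across restarts.
sudo crictl ps -a --quiet --name=kube-apiserver
sudo crictl logs --tail 400 55eabc6b310d11652dacd8619d5c8576e4a8dd6e56b763e6f5f40bd868a7aded
```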
	I0729 11:20:09.141269 3116606 logs.go:123] Gathering logs for kubelet ...
	I0729 11:20:09.141305 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 11:20:09.248117 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597182     661 reflector.go:138] object-"kube-system"/"kube-proxy-token-7kgps": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-7kgps" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:20:09.248357 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597314     661 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:20:09.248595 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597368     661 reflector.go:138] object-"kube-system"/"metrics-server-token-jpdkd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-jpdkd" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:20:09.248812 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597412     661 reflector.go:138] object-"default"/"default-token-gc665": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-gc665" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:20:09.249020 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597478     661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:20:09.249255 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597531     661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-bnfpv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-bnfpv" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:20:09.249478 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597580     661 reflector.go:138] object-"kube-system"/"coredns-token-gpx2v": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-gpx2v" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:20:09.249694 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597484     661 reflector.go:138] object-"kube-system"/"kindnet-token-vw6mq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-vw6mq" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:20:09.259399 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:27 old-k8s-version-398652 kubelet[661]: E0729 11:14:27.458872     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0729 11:20:09.265320 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:27 old-k8s-version-398652 kubelet[661]: E0729 11:14:27.878876     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.269132 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:41 old-k8s-version-398652 kubelet[661]: E0729 11:14:41.668429     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0729 11:20:09.271415 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:51 old-k8s-version-398652 kubelet[661]: E0729 11:14:51.984489     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.271788 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:52 old-k8s-version-398652 kubelet[661]: E0729 11:14:52.978054     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.272069 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:53 old-k8s-version-398652 kubelet[661]: E0729 11:14:53.673014     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.272815 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:57 old-k8s-version-398652 kubelet[661]: E0729 11:14:57.307270     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.275520 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:05 old-k8s-version-398652 kubelet[661]: E0729 11:15:05.669370     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0729 11:20:09.276519 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:10 old-k8s-version-398652 kubelet[661]: E0729 11:15:10.052366     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.278566 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:17 old-k8s-version-398652 kubelet[661]: E0729 11:15:17.335946     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.278787 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:17 old-k8s-version-398652 kubelet[661]: E0729 11:15:17.660197     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.279403 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:31 old-k8s-version-398652 kubelet[661]: E0729 11:15:31.131386     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.279603 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:31 old-k8s-version-398652 kubelet[661]: E0729 11:15:31.660183     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.279975 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:37 old-k8s-version-398652 kubelet[661]: E0729 11:15:37.307168     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.280170 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:42 old-k8s-version-398652 kubelet[661]: E0729 11:15:42.660275     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.280506 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:51 old-k8s-version-398652 kubelet[661]: E0729 11:15:51.660440     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.283074 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:54 old-k8s-version-398652 kubelet[661]: E0729 11:15:54.670644     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0729 11:20:09.283413 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:05 old-k8s-version-398652 kubelet[661]: E0729 11:16:05.659590     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.283604 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:07 old-k8s-version-398652 kubelet[661]: E0729 11:16:07.660440     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.286269 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:19 old-k8s-version-398652 kubelet[661]: E0729 11:16:19.271655     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.286478 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:19 old-k8s-version-398652 kubelet[661]: E0729 11:16:19.669252     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.286816 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:27 old-k8s-version-398652 kubelet[661]: E0729 11:16:27.307974     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.287015 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:34 old-k8s-version-398652 kubelet[661]: E0729 11:16:34.659936     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.287353 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:41 old-k8s-version-398652 kubelet[661]: E0729 11:16:41.659566     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.287543 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:49 old-k8s-version-398652 kubelet[661]: E0729 11:16:49.659815     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.288736 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:54 old-k8s-version-398652 kubelet[661]: E0729 11:16:54.660512     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.288942 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:02 old-k8s-version-398652 kubelet[661]: E0729 11:17:02.664093     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.289292 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:07 old-k8s-version-398652 kubelet[661]: E0729 11:17:07.659718     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.289488 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:13 old-k8s-version-398652 kubelet[661]: E0729 11:17:13.659955     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.289826 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:22 old-k8s-version-398652 kubelet[661]: E0729 11:17:22.659654     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.292345 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:24 old-k8s-version-398652 kubelet[661]: E0729 11:17:24.668985     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0729 11:20:09.292700 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:34 old-k8s-version-398652 kubelet[661]: E0729 11:17:34.659569     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.292890 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:36 old-k8s-version-398652 kubelet[661]: E0729 11:17:36.661730     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.293078 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:48 old-k8s-version-398652 kubelet[661]: E0729 11:17:48.661201     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.293681 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:50 old-k8s-version-398652 kubelet[661]: E0729 11:17:50.522025     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.294017 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:57 old-k8s-version-398652 kubelet[661]: E0729 11:17:57.307619     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.294208 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:01 old-k8s-version-398652 kubelet[661]: E0729 11:18:01.660045     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.294544 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:08 old-k8s-version-398652 kubelet[661]: E0729 11:18:08.661963     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.294733 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:16 old-k8s-version-398652 kubelet[661]: E0729 11:18:16.660611     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.295068 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:21 old-k8s-version-398652 kubelet[661]: E0729 11:18:21.659655     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.295256 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:29 old-k8s-version-398652 kubelet[661]: E0729 11:18:29.660643     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.295601 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:32 old-k8s-version-398652 kubelet[661]: E0729 11:18:32.659948     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.295982 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:40 old-k8s-version-398652 kubelet[661]: E0729 11:18:40.660579     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.296323 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:44 old-k8s-version-398652 kubelet[661]: E0729 11:18:44.660231     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.296513 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:51 old-k8s-version-398652 kubelet[661]: E0729 11:18:51.660081     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.296859 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:55 old-k8s-version-398652 kubelet[661]: E0729 11:18:55.660086     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.297049 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:02 old-k8s-version-398652 kubelet[661]: E0729 11:19:02.660084     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.297385 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:06 old-k8s-version-398652 kubelet[661]: E0729 11:19:06.660508     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.297575 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:17 old-k8s-version-398652 kubelet[661]: E0729 11:19:17.660133     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.297912 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:19 old-k8s-version-398652 kubelet[661]: E0729 11:19:19.659640     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.298101 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:32 old-k8s-version-398652 kubelet[661]: E0729 11:19:32.660866     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.298437 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:34 old-k8s-version-398652 kubelet[661]: E0729 11:19:34.659747     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.298629 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:45 old-k8s-version-398652 kubelet[661]: E0729 11:19:45.659987     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.298964 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:47 old-k8s-version-398652 kubelet[661]: E0729 11:19:47.660198     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.299158 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:58 old-k8s-version-398652 kubelet[661]: E0729 11:19:58.666044     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.299494 3116606 logs.go:138] Found kubelet problem: Jul 29 11:20:02 old-k8s-version-398652 kubelet[661]: E0729 11:20:02.660579     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	I0729 11:20:09.299504 3116606 logs.go:123] Gathering logs for etcd [d855c664b20f282851a23aa13af697ef4f539406374e1c860c26597b84f8ee75] ...
	I0729 11:20:09.299518 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d855c664b20f282851a23aa13af697ef4f539406374e1c860c26597b84f8ee75"
	I0729 11:20:09.373979 3116606 logs.go:123] Gathering logs for kindnet [be4fb3954f9193d0577447927a1b728347ba8abdcfffe06990bb5d05b6c8f49c] ...
	I0729 11:20:09.374044 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be4fb3954f9193d0577447927a1b728347ba8abdcfffe06990bb5d05b6c8f49c"
	I0729 11:20:09.477108 3116606 logs.go:123] Gathering logs for kube-scheduler [7743ce5235b563b5fef6aed42a02b9652010558f0c0bca72fdd35f7237352e4e] ...
	I0729 11:20:09.477158 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7743ce5235b563b5fef6aed42a02b9652010558f0c0bca72fdd35f7237352e4e"
	I0729 11:20:09.557836 3116606 logs.go:123] Gathering logs for kube-controller-manager [789c7fdc7b8aac104b10d2c1cca0c6ce267d3325a6305aaea9f9af92bab8c889] ...
	I0729 11:20:09.558052 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 789c7fdc7b8aac104b10d2c1cca0c6ce267d3325a6305aaea9f9af92bab8c889"
	I0729 11:20:09.642070 3116606 logs.go:123] Gathering logs for kindnet [e47e4b203143f4c04a2625539152adf493fbd66f0141c8fa35d67c0eb9dcd15e] ...
	I0729 11:20:09.642145 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e47e4b203143f4c04a2625539152adf493fbd66f0141c8fa35d67c0eb9dcd15e"
	I0729 11:20:09.751747 3116606 logs.go:123] Gathering logs for dmesg ...
	I0729 11:20:09.751827 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:20:09.779246 3116606 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:20:09.779276 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 11:20:09.988272 3116606 logs.go:123] Gathering logs for kube-apiserver [55eabc6b310d11652dacd8619d5c8576e4a8dd6e56b763e6f5f40bd868a7aded] ...
	I0729 11:20:09.988348 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55eabc6b310d11652dacd8619d5c8576e4a8dd6e56b763e6f5f40bd868a7aded"
	I0729 11:20:10.084649 3116606 logs.go:123] Gathering logs for kube-apiserver [8db7d55daf4e8f1f7c356410dce4fc8bfe4e73b58c73519316918d020f07a738] ...
	I0729 11:20:10.084736 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8db7d55daf4e8f1f7c356410dce4fc8bfe4e73b58c73519316918d020f07a738"
	I0729 11:20:10.182842 3116606 logs.go:123] Gathering logs for kube-scheduler [92e67f37a7b9d727171d0240a5fde8b95850b192051b0f809bbe087f8c7de33a] ...
	I0729 11:20:10.182931 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92e67f37a7b9d727171d0240a5fde8b95850b192051b0f809bbe087f8c7de33a"
	I0729 11:20:10.299400 3116606 logs.go:123] Gathering logs for containerd ...
	I0729 11:20:10.299424 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0729 11:20:10.366583 3116606 logs.go:123] Gathering logs for kube-controller-manager [8ccafb224e43a5b6518db9936d1dc9fd44a73e2192879bb5bf0f3ce3b4d175cc] ...
	I0729 11:20:10.366659 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ccafb224e43a5b6518db9936d1dc9fd44a73e2192879bb5bf0f3ce3b4d175cc"
	I0729 11:20:10.454249 3116606 logs.go:123] Gathering logs for storage-provisioner [63ccc5a016621ddee17a12e23e7873395935fcf7d04f3ffabff8ba671927254a] ...
	I0729 11:20:10.454348 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63ccc5a016621ddee17a12e23e7873395935fcf7d04f3ffabff8ba671927254a"
	I0729 11:20:10.519773 3116606 logs.go:123] Gathering logs for container status ...
	I0729 11:20:10.519822 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:20:10.579925 3116606 logs.go:123] Gathering logs for etcd [587b9ef1a62073411270ee8720a4b580bb9466a8ed4aee8f1f4ef0f09e399e7c] ...
	I0729 11:20:10.579972 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 587b9ef1a62073411270ee8720a4b580bb9466a8ed4aee8f1f4ef0f09e399e7c"
	I0729 11:20:10.645837 3116606 logs.go:123] Gathering logs for coredns [ac883e66c537e35bc5030b86851432ea59b4a9c103d84e4ca5b61faffade7098] ...
	I0729 11:20:10.645871 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac883e66c537e35bc5030b86851432ea59b4a9c103d84e4ca5b61faffade7098"
	I0729 11:20:10.710387 3116606 logs.go:123] Gathering logs for coredns [d8094d57752deded43c4f1971f720e95945f0e8e8bd5e4a2575c116f7dc73449] ...
	I0729 11:20:10.710423 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8094d57752deded43c4f1971f720e95945f0e8e8bd5e4a2575c116f7dc73449"
	I0729 11:20:10.763078 3116606 logs.go:123] Gathering logs for kube-proxy [54ffb19a0292eb77b61a76c3728fb619af5c455bf9ff1241a21b0069be4e8747] ...
	I0729 11:20:10.763114 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ffb19a0292eb77b61a76c3728fb619af5c455bf9ff1241a21b0069be4e8747"
	I0729 11:20:10.819727 3116606 logs.go:123] Gathering logs for kube-proxy [b2c3fad36616c573babfc67ee709885d5905cf5a54593886a6f579147c8ce133] ...
	I0729 11:20:10.819756 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2c3fad36616c573babfc67ee709885d5905cf5a54593886a6f579147c8ce133"
	I0729 11:20:10.876517 3116606 logs.go:123] Gathering logs for kubernetes-dashboard [0afb69ae0e699da6d8df0dbfb7b284327d738087f9b4ba1a283917462e4ff191] ...
	I0729 11:20:10.876546 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0afb69ae0e699da6d8df0dbfb7b284327d738087f9b4ba1a283917462e4ff191"
	I0729 11:20:10.961902 3116606 logs.go:123] Gathering logs for storage-provisioner [c353bab52107db86c72f21b2699f5c44a9e22f17ce40f5d83659ce4f08e9b3d4] ...
	I0729 11:20:10.961934 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c353bab52107db86c72f21b2699f5c44a9e22f17ce40f5d83659ce4f08e9b3d4"
	I0729 11:20:11.048902 3116606 out.go:304] Setting ErrFile to fd 2...
	I0729 11:20:11.048930 3116606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 11:20:11.048991 3116606 out.go:239] X Problems detected in kubelet:
	W0729 11:20:11.049007 3116606 out.go:239]   Jul 29 11:19:34 old-k8s-version-398652 kubelet[661]: E0729 11:19:34.659747     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:11.049015 3116606 out.go:239]   Jul 29 11:19:45 old-k8s-version-398652 kubelet[661]: E0729 11:19:45.659987     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:11.049024 3116606 out.go:239]   Jul 29 11:19:47 old-k8s-version-398652 kubelet[661]: E0729 11:19:47.660198     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:11.049157 3116606 out.go:239]   Jul 29 11:19:58 old-k8s-version-398652 kubelet[661]: E0729 11:19:58.666044     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:11.049170 3116606 out.go:239]   Jul 29 11:20:02 old-k8s-version-398652 kubelet[661]: E0729 11:20:02.660579     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	I0729 11:20:11.049180 3116606 out.go:304] Setting ErrFile to fd 2...
	I0729 11:20:11.049187 3116606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:20:21.050100 3116606 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0729 11:20:21.060161 3116606 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0729 11:20:21.063295 3116606 out.go:177] 
	W0729 11:20:21.065653 3116606 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0729 11:20:21.065698 3116606 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0729 11:20:21.065717 3116606 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0729 11:20:21.065725 3116606 out.go:239] * 
	W0729 11:20:21.067108 3116606 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:20:21.068724 3116606 out.go:177] 

                                                
                                                
** /stderr **
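The recurring "Found kubelet problem" warnings in the stderr capture above come from minikube scanning the node's journal for known failure keywords. Both offenders are expected side effects of the test setup: metrics-server is deliberately pointed at the unreachable fake.domain registry (see the `addons enable metrics-server ... --registries=MetricsServer=fake.domain` entry in the Audit table below), so ErrImagePull/ImagePullBackOff never resolves. A minimal Go sketch of that kind of keyword scan, as a hypothetical illustration rather than minikube's actual logs.go code:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// Keywords that mark a line as a "kubelet problem" in the capture above.
var problemMarkers = []string{
	"CrashLoopBackOff",
	"ImagePullBackOff",
	"ErrImagePull",
}

// findKubeletProblems returns every journal line that mentions a
// known failure keyword, analogous to the W-prefixed lines above.
func findKubeletProblems(journal string) []string {
	var problems []string
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		line := sc.Text()
		for _, marker := range problemMarkers {
			if strings.Contains(line, marker) {
				problems = append(problems, line)
				break
			}
		}
	}
	return problems
}

func main() {
	// Abbreviated sample lines in the shape of the journal excerpt above.
	journal := "Jul 29 11:20:02 kubelet[661]: E0729 ... CrashLoopBackOff ...\n" +
		"Jul 29 11:20:05 kubelet[661]: I0729 ... container started"
	for _, p := range findKubeletProblems(journal) {
		fmt.Println(p)
	}
}

Run over the journal excerpt above, a scan like this reproduces the flagged lines one-for-one, minus the glog prefixes.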
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-398652 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 102
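Note the shape of the failure: the apiserver's /healthz answered 200 "ok" at 11:20:21, yet the start still exited 102 with K8S_UNHEALTHY_CONTROL_PLANE because the control plane never reported the requested v1.20.0 within the 6m wait. A sketch of the healthz probe itself, assuming the self-signed serving certificate minikube's apiserver presents (hypothetical helper, not the actual api_server.go implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz probes an apiserver /healthz endpoint and treats
// anything other than a 200 "ok" body as unhealthy.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The apiserver serves a self-signed certificate here,
			// so certificate verification is skipped for the probe.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.76.2:8443/healthz"); err != nil {
		fmt.Println("unhealthy:", err)
		return
	}
	fmt.Println("ok")
}

The real check layers a node-version comparison on top of this probe, and that comparison appears to be the part that timed out here.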
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-398652
helpers_test.go:235: (dbg) docker inspect old-k8s-version-398652:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "68808f0091f74ff82c8a20f2a518a947b2c640415d67b68d801ac6172dfa3a27",
	        "Created": "2024-07-29T11:11:00.431578038Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3116970,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-07-29T11:14:00.466445301Z",
	            "FinishedAt": "2024-07-29T11:13:59.055510692Z"
	        },
	        "Image": "sha256:2cd84ab2172023a68162f38a55db46353562cea41552fd8e8bdec97b31b2c495",
	        "ResolvConfPath": "/var/lib/docker/containers/68808f0091f74ff82c8a20f2a518a947b2c640415d67b68d801ac6172dfa3a27/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/68808f0091f74ff82c8a20f2a518a947b2c640415d67b68d801ac6172dfa3a27/hostname",
	        "HostsPath": "/var/lib/docker/containers/68808f0091f74ff82c8a20f2a518a947b2c640415d67b68d801ac6172dfa3a27/hosts",
	        "LogPath": "/var/lib/docker/containers/68808f0091f74ff82c8a20f2a518a947b2c640415d67b68d801ac6172dfa3a27/68808f0091f74ff82c8a20f2a518a947b2c640415d67b68d801ac6172dfa3a27-json.log",
	        "Name": "/old-k8s-version-398652",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-398652:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-398652",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/bc897bbc47db7d6f4c213babf32c7f0530b0521cd35a88b93bd11c2f7194904f-init/diff:/var/lib/docker/overlay2/b09444c3e24393d9bf23bfbe615192567d3e49b78ae04c34cc2ea1bd8f080cde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bc897bbc47db7d6f4c213babf32c7f0530b0521cd35a88b93bd11c2f7194904f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bc897bbc47db7d6f4c213babf32c7f0530b0521cd35a88b93bd11c2f7194904f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bc897bbc47db7d6f4c213babf32c7f0530b0521cd35a88b93bd11c2f7194904f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-398652",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-398652/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-398652",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-398652",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-398652",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "31a9745f938651e5737191e697be9a683254a9115f7381e7e5bbd1d160ce199d",
	            "SandboxKey": "/var/run/docker/netns/31a9745f9386",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36764"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36765"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36768"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36766"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "36767"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-398652": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "39a243fa8be890e8f2e2c051f77ee4d2078ce96997e71a56defeeac912356d56",
	                    "EndpointID": "54b6be944e3141ed7b59dcda103dceeaf9a0c2ebfdb6e373719694a9b5428065",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-398652",
	                        "68808f0091f7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
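The inspect dump shows the container itself is fine: State.Status is "running", RestartCount is 0, and the apiserver's 8443/tcp is published on 127.0.0.1:36767, consistent with the healthz probe succeeding. A small hypothetical Go helper, assuming only the JSON shape shown above, that extracts those fields:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectEntry covers just the fields the post-mortem above looks at.
type inspectEntry struct {
	Name  string
	State struct {
		Status  string
		Running bool
	}
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "old-k8s-version-398652").Output()
	if err != nil {
		panic(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		panic(err)
	}
	for _, e := range entries {
		fmt.Printf("%s: status=%s running=%v\n", e.Name, e.State.Status, e.State.Running)
		// Docker publishes each container port as a list of host bindings.
		for _, b := range e.NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("  apiserver published on %s:%s\n", b.HostIP, b.HostPort)
		}
	}
}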
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-398652 -n old-k8s-version-398652
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-398652 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-398652 logs -n 25: (2.518441405s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-262221                              | cert-expiration-262221   | jenkins | v1.33.1 | 29 Jul 24 11:09 UTC | 29 Jul 24 11:10 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-680883                               | force-systemd-env-680883 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-680883                            | force-systemd-env-680883 | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	| start   | -p cert-options-873297                                 | cert-options-873297      | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-873297 ssh                                | cert-options-873297      | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-873297 -- sudo                         | cert-options-873297      | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-873297                                 | cert-options-873297      | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:10 UTC |
	| start   | -p old-k8s-version-398652                              | old-k8s-version-398652   | jenkins | v1.33.1 | 29 Jul 24 11:10 UTC | 29 Jul 24 11:13 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-262221                              | cert-expiration-262221   | jenkins | v1.33.1 | 29 Jul 24 11:13 UTC | 29 Jul 24 11:13 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-262221                              | cert-expiration-262221   | jenkins | v1.33.1 | 29 Jul 24 11:13 UTC | 29 Jul 24 11:13 UTC |
	| start   | -p no-preload-707151 --memory=2200                     | no-preload-707151        | jenkins | v1.33.1 | 29 Jul 24 11:13 UTC | 29 Jul 24 11:14 UTC |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --preload=false --driver=docker                        |                          |         |         |                     |                     |
	|         |  --container-runtime=containerd                        |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-398652        | old-k8s-version-398652   | jenkins | v1.33.1 | 29 Jul 24 11:13 UTC | 29 Jul 24 11:13 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-398652                              | old-k8s-version-398652   | jenkins | v1.33.1 | 29 Jul 24 11:13 UTC | 29 Jul 24 11:13 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-398652             | old-k8s-version-398652   | jenkins | v1.33.1 | 29 Jul 24 11:13 UTC | 29 Jul 24 11:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-398652                              | old-k8s-version-398652   | jenkins | v1.33.1 | 29 Jul 24 11:13 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-707151             | no-preload-707151        | jenkins | v1.33.1 | 29 Jul 24 11:14 UTC | 29 Jul 24 11:14 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-707151                                   | no-preload-707151        | jenkins | v1.33.1 | 29 Jul 24 11:14 UTC | 29 Jul 24 11:15 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-707151                  | no-preload-707151        | jenkins | v1.33.1 | 29 Jul 24 11:15 UTC | 29 Jul 24 11:15 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-707151 --memory=2200                     | no-preload-707151        | jenkins | v1.33.1 | 29 Jul 24 11:15 UTC | 29 Jul 24 11:19 UTC |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --preload=false --driver=docker                        |                          |         |         |                     |                     |
	|         |  --container-runtime=containerd                        |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0                    |                          |         |         |                     |                     |
	| image   | no-preload-707151 image list                           | no-preload-707151        | jenkins | v1.33.1 | 29 Jul 24 11:19 UTC | 29 Jul 24 11:19 UTC |
	|         | --format=json                                          |                          |         |         |                     |                     |
	| pause   | -p no-preload-707151                                   | no-preload-707151        | jenkins | v1.33.1 | 29 Jul 24 11:19 UTC | 29 Jul 24 11:19 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| unpause | -p no-preload-707151                                   | no-preload-707151        | jenkins | v1.33.1 | 29 Jul 24 11:19 UTC | 29 Jul 24 11:19 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| delete  | -p no-preload-707151                                   | no-preload-707151        | jenkins | v1.33.1 | 29 Jul 24 11:19 UTC | 29 Jul 24 11:19 UTC |
	| delete  | -p no-preload-707151                                   | no-preload-707151        | jenkins | v1.33.1 | 29 Jul 24 11:19 UTC | 29 Jul 24 11:19 UTC |
	| start   | -p embed-certs-483052                                  | embed-certs-483052       | jenkins | v1.33.1 | 29 Jul 24 11:19 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 11:19:52
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 11:19:52.293627 3127158 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:19:52.293807 3127158 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:19:52.293819 3127158 out.go:304] Setting ErrFile to fd 2...
	I0729 11:19:52.293825 3127158 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:19:52.294056 3127158 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-2904404/.minikube/bin
	I0729 11:19:52.294500 3127158 out.go:298] Setting JSON to false
	I0729 11:19:52.295619 3127158 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":68543,"bootTime":1722183450,"procs":226,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0729 11:19:52.295692 3127158 start.go:139] virtualization:  
	I0729 11:19:52.298954 3127158 out.go:177] * [embed-certs-483052] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0729 11:19:52.302165 3127158 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 11:19:52.302220 3127158 notify.go:220] Checking for updates...
	I0729 11:19:52.307326 3127158 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:19:52.310588 3127158 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-2904404/kubeconfig
	I0729 11:19:52.317061 3127158 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-2904404/.minikube
	I0729 11:19:52.319634 3127158 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0729 11:19:52.322213 3127158 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:19:52.325662 3127158 config.go:182] Loaded profile config "old-k8s-version-398652": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0729 11:19:52.325761 3127158 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:19:52.356148 3127158 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0729 11:19:52.356265 3127158 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 11:19:52.411189 3127158 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-29 11:19:52.401757428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0729 11:19:52.411303 3127158 docker.go:307] overlay module found
	I0729 11:19:52.440740 3127158 out.go:177] * Using the docker driver based on user configuration
	I0729 11:19:52.449777 3127158 start.go:297] selected driver: docker
	I0729 11:19:52.449802 3127158 start.go:901] validating driver "docker" against <nil>
	I0729 11:19:52.449816 3127158 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:19:52.450471 3127158 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 11:19:52.504325 3127158 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-29 11:19:52.494560252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0729 11:19:52.504497 3127158 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 11:19:52.504736 3127158 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0729 11:19:52.506634 3127158 out.go:177] * Using Docker driver with root privileges
	I0729 11:19:52.508520 3127158 cni.go:84] Creating CNI manager for ""
	I0729 11:19:52.508547 3127158 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0729 11:19:52.508557 3127158 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 11:19:52.508681 3127158 start.go:340] cluster config:
	{Name:embed-certs-483052 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-483052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
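	
	The config dump above is the Go struct that gets persisted to the profile's config.json (the save is logged just below). A hedged way to spot-check the saved values, assuming the JSON field names match the struct dump and jq is available on the host:
	
	jq '{Name, Driver, Memory, KubernetesVersion: .KubernetesConfig.KubernetesVersion}' \
	  /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/config.json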
	I0729 11:19:52.510708 3127158 out.go:177] * Starting "embed-certs-483052" primary control-plane node in "embed-certs-483052" cluster
	I0729 11:19:52.512497 3127158 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0729 11:19:52.514349 3127158 out.go:177] * Pulling base image v0.0.44-1721902582-19326 ...
	I0729 11:19:52.516225 3127158 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0729 11:19:52.516285 3127158 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19337-2904404/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4
	I0729 11:19:52.516300 3127158 cache.go:56] Caching tarball of preloaded images
	I0729 11:19:52.516397 3127158 preload.go:172] Found /home/jenkins/minikube-integration/19337-2904404/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0729 11:19:52.516413 3127158 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on containerd
	I0729 11:19:52.516514 3127158 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/config.json ...
	I0729 11:19:52.516537 3127158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/config.json: {Name:mk783f6356a6da3bdd82938e858a1a6851917650 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:19:52.516631 3127158 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	W0729 11:19:52.537274 3127158 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 is of wrong architecture
	I0729 11:19:52.537305 3127158 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 11:19:52.537387 3127158 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0729 11:19:52.537410 3127158 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0729 11:19:52.537416 3127158 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0729 11:19:52.537424 3127158 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0729 11:19:52.537432 3127158 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from local cache
	I0729 11:19:52.659867 3127158 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 from cached tarball
	I0729 11:19:52.659900 3127158 cache.go:194] Successfully downloaded all kic artifacts
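	
	The "wrong architecture" warning above (image.go:95) means the kicbase image in the local docker daemon is not arm64, so minikube falls back to its cached tarball. A rough manual equivalent of that check (an illustrative sketch, not minikube's actual code path):
	
	docker image inspect \
	  --format '{{.Architecture}}' \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7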
	I0729 11:19:52.659945 3127158 start.go:360] acquireMachinesLock for embed-certs-483052: {Name:mk1182563c15a9347eba4115bc772fde1f05fcd6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0729 11:19:52.660469 3127158 start.go:364] duration metric: took 501.914µs to acquireMachinesLock for "embed-certs-483052"
	I0729 11:19:52.660508 3127158 start.go:93] Provisioning new machine with config: &{Name:embed-certs-483052 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-483052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0729 11:19:52.660667 3127158 start.go:125] createHost starting for "" (driver="docker")
	I0729 11:19:51.089008 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:19:53.585236 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
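	
	The pod_ready lines interleaved here come from the concurrent old-k8s-version test (PID 3116606) polling the metrics-server pod's Ready condition. A hedged manual equivalent of that poll, with the context name assumed to match the profile name logged above:
	
	kubectl --context old-k8s-version-398652 -n kube-system get pod \
	  metrics-server-9975d5f86-c578w \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'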
	I0729 11:19:52.664439 3127158 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0729 11:19:52.665428 3127158 start.go:159] libmachine.API.Create for "embed-certs-483052" (driver="docker")
	I0729 11:19:52.665468 3127158 client.go:168] LocalClient.Create starting
	I0729 11:19:52.665547 3127158 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/ca.pem
	I0729 11:19:52.665582 3127158 main.go:141] libmachine: Decoding PEM data...
	I0729 11:19:52.665598 3127158 main.go:141] libmachine: Parsing certificate...
	I0729 11:19:52.665652 3127158 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/cert.pem
	I0729 11:19:52.665669 3127158 main.go:141] libmachine: Decoding PEM data...
	I0729 11:19:52.665683 3127158 main.go:141] libmachine: Parsing certificate...
	I0729 11:19:52.666044 3127158 cli_runner.go:164] Run: docker network inspect embed-certs-483052 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0729 11:19:52.681521 3127158 cli_runner.go:211] docker network inspect embed-certs-483052 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0729 11:19:52.681626 3127158 network_create.go:284] running [docker network inspect embed-certs-483052] to gather additional debugging logs...
	I0729 11:19:52.681647 3127158 cli_runner.go:164] Run: docker network inspect embed-certs-483052
	W0729 11:19:52.696480 3127158 cli_runner.go:211] docker network inspect embed-certs-483052 returned with exit code 1
	I0729 11:19:52.696512 3127158 network_create.go:287] error running [docker network inspect embed-certs-483052]: docker network inspect embed-certs-483052: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-483052 not found
	I0729 11:19:52.696526 3127158 network_create.go:289] output of [docker network inspect embed-certs-483052]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-483052 not found
	
	** /stderr **
	I0729 11:19:52.696637 3127158 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 11:19:52.713243 3127158 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d7c37e03952f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:7a:61:62:42} reservation:<nil>}
	I0729 11:19:52.714305 3127158 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-7d242985534e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:3a:98:02:de} reservation:<nil>}
	I0729 11:19:52.714875 3127158 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ad1a7866eabf IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:c2:2a:61:14} reservation:<nil>}
	I0729 11:19:52.715505 3127158 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-39a243fa8be8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:02:42:e6:47:ad:40} reservation:<nil>}
	I0729 11:19:52.716909 3127158 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400162ec40}
	I0729 11:19:52.716965 3127158 network_create.go:124] attempt to create docker network embed-certs-483052 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0729 11:19:52.717063 3127158 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-483052 embed-certs-483052
	I0729 11:19:52.803481 3127158 network_create.go:108] docker network embed-certs-483052 192.168.85.0/24 created
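	
	The subnet probing above walks candidate 192.168.x.0/24 ranges until it finds one that no existing bridge network claims. The same taken-subnet view can be obtained directly from docker (a sketch of the idea, not minikube's code path):
	
	docker network inspect $(docker network ls -q) \
	  --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'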
	I0729 11:19:52.803520 3127158 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-483052" container
	I0729 11:19:52.803606 3127158 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0729 11:19:52.820650 3127158 cli_runner.go:164] Run: docker volume create embed-certs-483052 --label name.minikube.sigs.k8s.io=embed-certs-483052 --label created_by.minikube.sigs.k8s.io=true
	I0729 11:19:52.838541 3127158 oci.go:103] Successfully created a docker volume embed-certs-483052
	I0729 11:19:52.838628 3127158 cli_runner.go:164] Run: docker run --rm --name embed-certs-483052-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-483052 --entrypoint /usr/bin/test -v embed-certs-483052:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -d /var/lib
	I0729 11:19:53.464763 3127158 oci.go:107] Successfully prepared a docker volume embed-certs-483052
	I0729 11:19:53.464830 3127158 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0729 11:19:53.464850 3127158 kic.go:194] Starting extracting preloaded images to volume ...
	I0729 11:19:53.464944 3127158 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19337-2904404/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-483052:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir
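	
	Once the tar extraction above finishes, the named volume should hold the preloaded containerd image store. A hypothetical spot-check (the /bin/ls entrypoint and the /var/lib/containerd path are assumptions, by analogy with the /usr/bin/test probe above):
	
	docker run --rm --entrypoint /bin/ls \
	  -v embed-certs-483052:/var \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 \
	  /var/lib/containerd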
	I0729 11:19:56.079650 3116606 pod_ready.go:102] pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace has status "Ready":"False"
	I0729 11:19:56.079681 3116606 pod_ready.go:81] duration metric: took 4m0.007278229s for pod "metrics-server-9975d5f86-c578w" in "kube-system" namespace to be "Ready" ...
	E0729 11:19:56.079691 3116606 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0729 11:19:56.079699 3116606 pod_ready.go:38] duration metric: took 5m30.312575574s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0729 11:19:56.079715 3116606 api_server.go:52] waiting for apiserver process to appear ...
	I0729 11:19:56.079751 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:19:56.079855 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:19:56.130557 3116606 cri.go:89] found id: "55eabc6b310d11652dacd8619d5c8576e4a8dd6e56b763e6f5f40bd868a7aded"
	I0729 11:19:56.130584 3116606 cri.go:89] found id: "8db7d55daf4e8f1f7c356410dce4fc8bfe4e73b58c73519316918d020f07a738"
	I0729 11:19:56.130596 3116606 cri.go:89] found id: ""
	I0729 11:19:56.130603 3116606 logs.go:276] 2 containers: [55eabc6b310d11652dacd8619d5c8576e4a8dd6e56b763e6f5f40bd868a7aded 8db7d55daf4e8f1f7c356410dce4fc8bfe4e73b58c73519316918d020f07a738]
	I0729 11:19:56.130680 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.136126 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.140182 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0729 11:19:56.140313 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:19:56.185651 3116606 cri.go:89] found id: "d855c664b20f282851a23aa13af697ef4f539406374e1c860c26597b84f8ee75"
	I0729 11:19:56.185675 3116606 cri.go:89] found id: "587b9ef1a62073411270ee8720a4b580bb9466a8ed4aee8f1f4ef0f09e399e7c"
	I0729 11:19:56.185680 3116606 cri.go:89] found id: ""
	I0729 11:19:56.185686 3116606 logs.go:276] 2 containers: [d855c664b20f282851a23aa13af697ef4f539406374e1c860c26597b84f8ee75 587b9ef1a62073411270ee8720a4b580bb9466a8ed4aee8f1f4ef0f09e399e7c]
	I0729 11:19:56.185749 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.190850 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.196125 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0729 11:19:56.196194 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:19:56.237499 3116606 cri.go:89] found id: "ac883e66c537e35bc5030b86851432ea59b4a9c103d84e4ca5b61faffade7098"
	I0729 11:19:56.237523 3116606 cri.go:89] found id: "d8094d57752deded43c4f1971f720e95945f0e8e8bd5e4a2575c116f7dc73449"
	I0729 11:19:56.237528 3116606 cri.go:89] found id: ""
	I0729 11:19:56.237536 3116606 logs.go:276] 2 containers: [ac883e66c537e35bc5030b86851432ea59b4a9c103d84e4ca5b61faffade7098 d8094d57752deded43c4f1971f720e95945f0e8e8bd5e4a2575c116f7dc73449]
	I0729 11:19:56.237605 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.241499 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.245312 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:19:56.245388 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:19:56.295106 3116606 cri.go:89] found id: "92e67f37a7b9d727171d0240a5fde8b95850b192051b0f809bbe087f8c7de33a"
	I0729 11:19:56.295132 3116606 cri.go:89] found id: "7743ce5235b563b5fef6aed42a02b9652010558f0c0bca72fdd35f7237352e4e"
	I0729 11:19:56.295136 3116606 cri.go:89] found id: ""
	I0729 11:19:56.295143 3116606 logs.go:276] 2 containers: [92e67f37a7b9d727171d0240a5fde8b95850b192051b0f809bbe087f8c7de33a 7743ce5235b563b5fef6aed42a02b9652010558f0c0bca72fdd35f7237352e4e]
	I0729 11:19:56.295210 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.300410 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.304077 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:19:56.304172 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:19:56.355092 3116606 cri.go:89] found id: "54ffb19a0292eb77b61a76c3728fb619af5c455bf9ff1241a21b0069be4e8747"
	I0729 11:19:56.355121 3116606 cri.go:89] found id: "b2c3fad36616c573babfc67ee709885d5905cf5a54593886a6f579147c8ce133"
	I0729 11:19:56.355163 3116606 cri.go:89] found id: ""
	I0729 11:19:56.355177 3116606 logs.go:276] 2 containers: [54ffb19a0292eb77b61a76c3728fb619af5c455bf9ff1241a21b0069be4e8747 b2c3fad36616c573babfc67ee709885d5905cf5a54593886a6f579147c8ce133]
	I0729 11:19:56.355257 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.358978 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.362686 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:19:56.362798 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:19:56.419406 3116606 cri.go:89] found id: "8ccafb224e43a5b6518db9936d1dc9fd44a73e2192879bb5bf0f3ce3b4d175cc"
	I0729 11:19:56.419430 3116606 cri.go:89] found id: "789c7fdc7b8aac104b10d2c1cca0c6ce267d3325a6305aaea9f9af92bab8c889"
	I0729 11:19:56.419436 3116606 cri.go:89] found id: ""
	I0729 11:19:56.419443 3116606 logs.go:276] 2 containers: [8ccafb224e43a5b6518db9936d1dc9fd44a73e2192879bb5bf0f3ce3b4d175cc 789c7fdc7b8aac104b10d2c1cca0c6ce267d3325a6305aaea9f9af92bab8c889]
	I0729 11:19:56.419502 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.423246 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.427274 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0729 11:19:56.427354 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:19:56.471669 3116606 cri.go:89] found id: "be4fb3954f9193d0577447927a1b728347ba8abdcfffe06990bb5d05b6c8f49c"
	I0729 11:19:56.471699 3116606 cri.go:89] found id: "e47e4b203143f4c04a2625539152adf493fbd66f0141c8fa35d67c0eb9dcd15e"
	I0729 11:19:56.471704 3116606 cri.go:89] found id: ""
	I0729 11:19:56.471711 3116606 logs.go:276] 2 containers: [be4fb3954f9193d0577447927a1b728347ba8abdcfffe06990bb5d05b6c8f49c e47e4b203143f4c04a2625539152adf493fbd66f0141c8fa35d67c0eb9dcd15e]
	I0729 11:19:56.471807 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.475384 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.478658 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0729 11:19:56.478747 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 11:19:56.535044 3116606 cri.go:89] found id: "63ccc5a016621ddee17a12e23e7873395935fcf7d04f3ffabff8ba671927254a"
	I0729 11:19:56.535069 3116606 cri.go:89] found id: "c353bab52107db86c72f21b2699f5c44a9e22f17ce40f5d83659ce4f08e9b3d4"
	I0729 11:19:56.535075 3116606 cri.go:89] found id: ""
	I0729 11:19:56.535082 3116606 logs.go:276] 2 containers: [63ccc5a016621ddee17a12e23e7873395935fcf7d04f3ffabff8ba671927254a c353bab52107db86c72f21b2699f5c44a9e22f17ce40f5d83659ce4f08e9b3d4]
	I0729 11:19:56.535172 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.539290 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:19:56.543338 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:19:56.543434 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:19:56.598377 3116606 cri.go:89] found id: "0afb69ae0e699da6d8df0dbfb7b284327d738087f9b4ba1a283917462e4ff191"
	I0729 11:19:56.598399 3116606 cri.go:89] found id: ""
	I0729 11:19:56.598407 3116606 logs.go:276] 1 containers: [0afb69ae0e699da6d8df0dbfb7b284327d738087f9b4ba1a283917462e4ff191]
	I0729 11:19:56.598496 3116606 ssh_runner.go:195] Run: which crictl
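	
	The block above is the discovery half of log collection: for each component, crictl lists matching container IDs, and the "Gathering logs" steps that follow tail each one. Condensed into a single hedged shell loop, using only the commands visible in the log:
	
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet storage-provisioner kubernetes-dashboard; do
	  for id in $(sudo crictl ps -a --quiet --name="$name"); do
	    sudo /usr/bin/crictl logs --tail 400 "$id"
	  done
	done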
	I0729 11:19:56.603285 3116606 logs.go:123] Gathering logs for storage-provisioner [c353bab52107db86c72f21b2699f5c44a9e22f17ce40f5d83659ce4f08e9b3d4] ...
	I0729 11:19:56.603323 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c353bab52107db86c72f21b2699f5c44a9e22f17ce40f5d83659ce4f08e9b3d4"
	I0729 11:19:56.650539 3116606 logs.go:123] Gathering logs for dmesg ...
	I0729 11:19:56.650571 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:19:56.680567 3116606 logs.go:123] Gathering logs for kube-apiserver [8db7d55daf4e8f1f7c356410dce4fc8bfe4e73b58c73519316918d020f07a738] ...
	I0729 11:19:56.680598 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8db7d55daf4e8f1f7c356410dce4fc8bfe4e73b58c73519316918d020f07a738"
	I0729 11:19:56.763644 3116606 logs.go:123] Gathering logs for coredns [ac883e66c537e35bc5030b86851432ea59b4a9c103d84e4ca5b61faffade7098] ...
	I0729 11:19:56.763714 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac883e66c537e35bc5030b86851432ea59b4a9c103d84e4ca5b61faffade7098"
	I0729 11:19:56.829727 3116606 logs.go:123] Gathering logs for kube-scheduler [92e67f37a7b9d727171d0240a5fde8b95850b192051b0f809bbe087f8c7de33a] ...
	I0729 11:19:56.829795 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92e67f37a7b9d727171d0240a5fde8b95850b192051b0f809bbe087f8c7de33a"
	I0729 11:19:56.875673 3116606 logs.go:123] Gathering logs for kube-proxy [b2c3fad36616c573babfc67ee709885d5905cf5a54593886a6f579147c8ce133] ...
	I0729 11:19:56.875742 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2c3fad36616c573babfc67ee709885d5905cf5a54593886a6f579147c8ce133"
	I0729 11:19:56.938328 3116606 logs.go:123] Gathering logs for kube-apiserver [55eabc6b310d11652dacd8619d5c8576e4a8dd6e56b763e6f5f40bd868a7aded] ...
	I0729 11:19:56.938397 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55eabc6b310d11652dacd8619d5c8576e4a8dd6e56b763e6f5f40bd868a7aded"
	I0729 11:19:57.025730 3116606 logs.go:123] Gathering logs for coredns [d8094d57752deded43c4f1971f720e95945f0e8e8bd5e4a2575c116f7dc73449] ...
	I0729 11:19:57.025768 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8094d57752deded43c4f1971f720e95945f0e8e8bd5e4a2575c116f7dc73449"
	I0729 11:19:57.110787 3116606 logs.go:123] Gathering logs for storage-provisioner [63ccc5a016621ddee17a12e23e7873395935fcf7d04f3ffabff8ba671927254a] ...
	I0729 11:19:57.110817 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63ccc5a016621ddee17a12e23e7873395935fcf7d04f3ffabff8ba671927254a"
	I0729 11:19:57.201956 3116606 logs.go:123] Gathering logs for kubernetes-dashboard [0afb69ae0e699da6d8df0dbfb7b284327d738087f9b4ba1a283917462e4ff191] ...
	I0729 11:19:57.201985 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0afb69ae0e699da6d8df0dbfb7b284327d738087f9b4ba1a283917462e4ff191"
	I0729 11:19:57.272147 3116606 logs.go:123] Gathering logs for container status ...
	I0729 11:19:57.272177 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:19:57.350104 3116606 logs.go:123] Gathering logs for etcd [587b9ef1a62073411270ee8720a4b580bb9466a8ed4aee8f1f4ef0f09e399e7c] ...
	I0729 11:19:57.350142 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 587b9ef1a62073411270ee8720a4b580bb9466a8ed4aee8f1f4ef0f09e399e7c"
	I0729 11:19:57.424914 3116606 logs.go:123] Gathering logs for kube-proxy [54ffb19a0292eb77b61a76c3728fb619af5c455bf9ff1241a21b0069be4e8747] ...
	I0729 11:19:57.424950 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ffb19a0292eb77b61a76c3728fb619af5c455bf9ff1241a21b0069be4e8747"
	I0729 11:19:57.484351 3116606 logs.go:123] Gathering logs for kube-controller-manager [8ccafb224e43a5b6518db9936d1dc9fd44a73e2192879bb5bf0f3ce3b4d175cc] ...
	I0729 11:19:57.484380 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ccafb224e43a5b6518db9936d1dc9fd44a73e2192879bb5bf0f3ce3b4d175cc"
	I0729 11:19:57.555230 3116606 logs.go:123] Gathering logs for kube-controller-manager [789c7fdc7b8aac104b10d2c1cca0c6ce267d3325a6305aaea9f9af92bab8c889] ...
	I0729 11:19:57.555265 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 789c7fdc7b8aac104b10d2c1cca0c6ce267d3325a6305aaea9f9af92bab8c889"
	I0729 11:19:57.624667 3116606 logs.go:123] Gathering logs for kindnet [e47e4b203143f4c04a2625539152adf493fbd66f0141c8fa35d67c0eb9dcd15e] ...
	I0729 11:19:57.624706 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e47e4b203143f4c04a2625539152adf493fbd66f0141c8fa35d67c0eb9dcd15e"
	I0729 11:19:57.714018 3116606 logs.go:123] Gathering logs for containerd ...
	I0729 11:19:57.714052 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0729 11:19:57.794340 3116606 logs.go:123] Gathering logs for kubelet ...
	I0729 11:19:57.794379 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 11:19:57.923004 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597182     661 reflector.go:138] object-"kube-system"/"kube-proxy-token-7kgps": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-7kgps" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:19:57.923301 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597314     661 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:19:57.923554 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597368     661 reflector.go:138] object-"kube-system"/"metrics-server-token-jpdkd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-jpdkd" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:19:57.923815 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597412     661 reflector.go:138] object-"default"/"default-token-gc665": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-gc665" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:19:57.924038 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597478     661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:19:57.924292 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597531     661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-bnfpv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-bnfpv" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:19:57.924525 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597580     661 reflector.go:138] object-"kube-system"/"coredns-token-gpx2v": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-gpx2v" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:19:57.924759 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597484     661 reflector.go:138] object-"kube-system"/"kindnet-token-vw6mq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-vw6mq" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:19:57.933060 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:27 old-k8s-version-398652 kubelet[661]: E0729 11:14:27.458872     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0729 11:19:57.933498 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:27 old-k8s-version-398652 kubelet[661]: E0729 11:14:27.878876     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.937020 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:41 old-k8s-version-398652 kubelet[661]: E0729 11:14:41.668429     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0729 11:19:57.939154 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:51 old-k8s-version-398652 kubelet[661]: E0729 11:14:51.984489     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.939509 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:52 old-k8s-version-398652 kubelet[661]: E0729 11:14:52.978054     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.939717 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:53 old-k8s-version-398652 kubelet[661]: E0729 11:14:53.673014     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.940417 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:57 old-k8s-version-398652 kubelet[661]: E0729 11:14:57.307270     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.943070 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:05 old-k8s-version-398652 kubelet[661]: E0729 11:15:05.669370     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0729 11:19:57.944235 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:10 old-k8s-version-398652 kubelet[661]: E0729 11:15:10.052366     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.944650 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:17 old-k8s-version-398652 kubelet[661]: E0729 11:15:17.335946     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.944892 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:17 old-k8s-version-398652 kubelet[661]: E0729 11:15:17.660197     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.945617 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:31 old-k8s-version-398652 kubelet[661]: E0729 11:15:31.131386     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.945870 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:31 old-k8s-version-398652 kubelet[661]: E0729 11:15:31.660183     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.946246 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:37 old-k8s-version-398652 kubelet[661]: E0729 11:15:37.307168     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.946464 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:42 old-k8s-version-398652 kubelet[661]: E0729 11:15:42.660275     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.946834 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:51 old-k8s-version-398652 kubelet[661]: E0729 11:15:51.660440     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.952808 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:54 old-k8s-version-398652 kubelet[661]: E0729 11:15:54.670644     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0729 11:19:57.953193 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:05 old-k8s-version-398652 kubelet[661]: E0729 11:16:05.659590     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.953407 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:07 old-k8s-version-398652 kubelet[661]: E0729 11:16:07.660440     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.954022 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:19 old-k8s-version-398652 kubelet[661]: E0729 11:16:19.271655     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.954219 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:19 old-k8s-version-398652 kubelet[661]: E0729 11:16:19.669252     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.954573 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:27 old-k8s-version-398652 kubelet[661]: E0729 11:16:27.307974     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.954782 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:34 old-k8s-version-398652 kubelet[661]: E0729 11:16:34.659936     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.955135 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:41 old-k8s-version-398652 kubelet[661]: E0729 11:16:41.659566     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.955347 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:49 old-k8s-version-398652 kubelet[661]: E0729 11:16:49.659815     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.955699 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:54 old-k8s-version-398652 kubelet[661]: E0729 11:16:54.660512     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.955988 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:02 old-k8s-version-398652 kubelet[661]: E0729 11:17:02.664093     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.956354 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:07 old-k8s-version-398652 kubelet[661]: E0729 11:17:07.659718     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.956547 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:13 old-k8s-version-398652 kubelet[661]: E0729 11:17:13.659955     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.956894 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:22 old-k8s-version-398652 kubelet[661]: E0729 11:17:22.659654     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.959376 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:24 old-k8s-version-398652 kubelet[661]: E0729 11:17:24.668985     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0729 11:19:57.959730 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:34 old-k8s-version-398652 kubelet[661]: E0729 11:17:34.659569     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.959931 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:36 old-k8s-version-398652 kubelet[661]: E0729 11:17:36.661730     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.960131 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:48 old-k8s-version-398652 kubelet[661]: E0729 11:17:48.661201     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.960746 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:50 old-k8s-version-398652 kubelet[661]: E0729 11:17:50.522025     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.961100 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:57 old-k8s-version-398652 kubelet[661]: E0729 11:17:57.307619     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.961310 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:01 old-k8s-version-398652 kubelet[661]: E0729 11:18:01.660045     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.961663 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:08 old-k8s-version-398652 kubelet[661]: E0729 11:18:08.661963     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.961872 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:16 old-k8s-version-398652 kubelet[661]: E0729 11:18:16.660611     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.962224 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:21 old-k8s-version-398652 kubelet[661]: E0729 11:18:21.659655     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.962437 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:29 old-k8s-version-398652 kubelet[661]: E0729 11:18:29.660643     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.962843 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:32 old-k8s-version-398652 kubelet[661]: E0729 11:18:32.659948     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.963045 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:40 old-k8s-version-398652 kubelet[661]: E0729 11:18:40.660579     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.963437 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:44 old-k8s-version-398652 kubelet[661]: E0729 11:18:44.660231     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.963644 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:51 old-k8s-version-398652 kubelet[661]: E0729 11:18:51.660081     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.964008 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:55 old-k8s-version-398652 kubelet[661]: E0729 11:18:55.660086     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.964217 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:02 old-k8s-version-398652 kubelet[661]: E0729 11:19:02.660084     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.964568 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:06 old-k8s-version-398652 kubelet[661]: E0729 11:19:06.660508     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.964774 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:17 old-k8s-version-398652 kubelet[661]: E0729 11:19:17.660133     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.965129 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:19 old-k8s-version-398652 kubelet[661]: E0729 11:19:19.659640     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.965338 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:32 old-k8s-version-398652 kubelet[661]: E0729 11:19:32.660866     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.965689 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:34 old-k8s-version-398652 kubelet[661]: E0729 11:19:34.659747     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:57.965895 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:45 old-k8s-version-398652 kubelet[661]: E0729 11:19:45.659987     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:57.966246 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:47 old-k8s-version-398652 kubelet[661]: E0729 11:19:47.660198     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
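All of the metrics-server failures in the block above share one root cause, visible in the ErrImagePull entry at 11:17:24: the registry host fake.domain never resolves, so every pull retry ends in ImagePullBackOff. A minimal standalone Go sketch (not part of the test harness; the host name is taken from the log) reproduces the underlying lookup failure:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Resolving the intentionally bogus registry host fails the same way
	// the kubelet log reports: "lookup fake.domain ... no such host".
	if _, err := net.LookupHost("fake.domain"); err != nil {
		fmt.Println("lookup failed as expected:", err)
	}
}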
	I0729 11:19:57.966259 3116606 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:19:57.966273 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 11:19:58.173565 3116606 logs.go:123] Gathering logs for etcd [d855c664b20f282851a23aa13af697ef4f539406374e1c860c26597b84f8ee75] ...
	I0729 11:19:58.173647 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d855c664b20f282851a23aa13af697ef4f539406374e1c860c26597b84f8ee75"
	I0729 11:19:58.225473 3116606 logs.go:123] Gathering logs for kube-scheduler [7743ce5235b563b5fef6aed42a02b9652010558f0c0bca72fdd35f7237352e4e] ...
	I0729 11:19:58.225504 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7743ce5235b563b5fef6aed42a02b9652010558f0c0bca72fdd35f7237352e4e"
	I0729 11:19:58.276472 3116606 logs.go:123] Gathering logs for kindnet [be4fb3954f9193d0577447927a1b728347ba8abdcfffe06990bb5d05b6c8f49c] ...
	I0729 11:19:58.276507 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be4fb3954f9193d0577447927a1b728347ba8abdcfffe06990bb5d05b6c8f49c"
	I0729 11:19:58.337556 3116606 out.go:304] Setting ErrFile to fd 2...
	I0729 11:19:58.337587 3116606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 11:19:58.337646 3116606 out.go:239] X Problems detected in kubelet:
	W0729 11:19:58.337659 3116606 out.go:239]   Jul 29 11:19:19 old-k8s-version-398652 kubelet[661]: E0729 11:19:19.659640     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:58.337677 3116606 out.go:239]   Jul 29 11:19:32 old-k8s-version-398652 kubelet[661]: E0729 11:19:32.660866     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:58.337688 3116606 out.go:239]   Jul 29 11:19:34 old-k8s-version-398652 kubelet[661]: E0729 11:19:34.659747     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:19:58.337695 3116606 out.go:239]   Jul 29 11:19:45 old-k8s-version-398652 kubelet[661]: E0729 11:19:45.659987     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:19:58.337701 3116606 out.go:239]   Jul 29 11:19:47 old-k8s-version-398652 kubelet[661]: E0729 11:19:47.660198     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	I0729 11:19:58.337708 3116606 out.go:304] Setting ErrFile to fd 2...
	I0729 11:19:58.337714 3116606 out.go:338] TERM=,COLORTERM=, which probably does not support color
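The "Found kubelet problem" warnings and the "Problems detected in kubelet" summary both come from minikube's log scanner (logs.go:138). A rough illustration of that kind of scan follows; the function name and the filter predicate are assumptions for this sketch, not minikube's actual pattern list:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// findKubeletProblems flags journal lines that look like kubelet pod-sync
// errors, the shape of every warning in the block above.
func findKubeletProblems(journal string) []string {
	var problems []string
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		line := sc.Text()
		if strings.Contains(line, "pod_workers.go") && strings.Contains(line, "Error syncing pod") {
			problems = append(problems, line)
		}
	}
	return problems
}

func main() {
	sample := "Jul 29 11:19:47 old-k8s-version-398652 kubelet[661]: E0729 11:19:47.660198     661 pod_workers.go:191] Error syncing pod 139278e5-..."
	for _, p := range findKubeletProblems(sample) {
		fmt.Println("Found kubelet problem:", p)
	}
}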
	I0729 11:19:59.232248 3127158 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19337-2904404/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-483052:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 -I lz4 -xf /preloaded.tar -C /extractDir: (5.767267527s)
	I0729 11:19:59.232288 3127158 kic.go:203] duration metric: took 5.767433656s to extract preloaded images to volume ...
	W0729 11:19:59.232461 3127158 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0729 11:19:59.232585 3127158 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0729 11:19:59.286745 3127158 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-483052 --name embed-certs-483052 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-483052 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-483052 --network embed-certs-483052 --ip 192.168.85.2 --volume embed-certs-483052:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7
	I0729 11:19:59.605894 3127158 cli_runner.go:164] Run: docker container inspect embed-certs-483052 --format={{.State.Running}}
	I0729 11:19:59.626973 3127158 cli_runner.go:164] Run: docker container inspect embed-certs-483052 --format={{.State.Status}}
	I0729 11:19:59.653626 3127158 cli_runner.go:164] Run: docker exec embed-certs-483052 stat /var/lib/dpkg/alternatives/iptables
	I0729 11:19:59.727610 3127158 oci.go:144] the created container "embed-certs-483052" has a running status.
	I0729 11:19:59.727653 3127158 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19337-2904404/.minikube/machines/embed-certs-483052/id_rsa...
	I0729 11:20:00.010606 3127158 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19337-2904404/.minikube/machines/embed-certs-483052/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0729 11:20:00.043232 3127158 cli_runner.go:164] Run: docker container inspect embed-certs-483052 --format={{.State.Status}}
	I0729 11:20:00.071610 3127158 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0729 11:20:00.071633 3127158 kic_runner.go:114] Args: [docker exec --privileged embed-certs-483052 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0729 11:20:00.421877 3127158 cli_runner.go:164] Run: docker container inspect embed-certs-483052 --format={{.State.Status}}
	I0729 11:20:00.507686 3127158 machine.go:94] provisionDockerMachine start ...
	I0729 11:20:00.507977 3127158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-483052
	I0729 11:20:00.599482 3127158 main.go:141] libmachine: Using SSH client type: native
	I0729 11:20:00.599824 3127158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 36774 <nil> <nil>}
	I0729 11:20:00.599840 3127158 main.go:141] libmachine: About to run SSH command:
	hostname
	I0729 11:20:00.829538 3127158 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-483052
	
	I0729 11:20:00.829566 3127158 ubuntu.go:169] provisioning hostname "embed-certs-483052"
	I0729 11:20:00.829639 3127158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-483052
	I0729 11:20:00.864609 3127158 main.go:141] libmachine: Using SSH client type: native
	I0729 11:20:00.864849 3127158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 36774 <nil> <nil>}
	I0729 11:20:00.864870 3127158 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-483052 && echo "embed-certs-483052" | sudo tee /etc/hostname
	I0729 11:20:01.035081 3127158 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-483052
	
	I0729 11:20:01.035173 3127158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-483052
	I0729 11:20:01.053159 3127158 main.go:141] libmachine: Using SSH client type: native
	I0729 11:20:01.053417 3127158 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e2cd0] 0x3e5530 <nil>  [] 0s} 127.0.0.1 36774 <nil> <nil>}
	I0729 11:20:01.053441 3127158 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-483052' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-483052/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-483052' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0729 11:20:01.192394 3127158 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0729 11:20:01.192424 3127158 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19337-2904404/.minikube CaCertPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19337-2904404/.minikube}
	I0729 11:20:01.192457 3127158 ubuntu.go:177] setting up certificates
	I0729 11:20:01.192473 3127158 provision.go:84] configureAuth start
	I0729 11:20:01.192543 3127158 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-483052
	I0729 11:20:01.212610 3127158 provision.go:143] copyHostCerts
	I0729 11:20:01.214639 3127158 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-2904404/.minikube/ca.pem, removing ...
	I0729 11:20:01.214690 3127158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-2904404/.minikube/ca.pem
	I0729 11:20:01.214794 3127158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19337-2904404/.minikube/ca.pem (1078 bytes)
	I0729 11:20:01.214934 3127158 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-2904404/.minikube/cert.pem, removing ...
	I0729 11:20:01.214940 3127158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-2904404/.minikube/cert.pem
	I0729 11:20:01.214971 3127158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19337-2904404/.minikube/cert.pem (1123 bytes)
	I0729 11:20:01.215037 3127158 exec_runner.go:144] found /home/jenkins/minikube-integration/19337-2904404/.minikube/key.pem, removing ...
	I0729 11:20:01.215054 3127158 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19337-2904404/.minikube/key.pem
	I0729 11:20:01.215087 3127158 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19337-2904404/.minikube/key.pem (1675 bytes)
	I0729 11:20:01.215303 3127158 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19337-2904404/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19337-2904404/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19337-2904404/.minikube/certs/ca-key.pem org=jenkins.embed-certs-483052 san=[127.0.0.1 192.168.85.2 embed-certs-483052 localhost minikube]
	I0729 11:20:01.648413 3127158 provision.go:177] copyRemoteCerts
	I0729 11:20:01.648495 3127158 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0729 11:20:01.648542 3127158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-483052
	I0729 11:20:01.665668 3127158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36774 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/embed-certs-483052/id_rsa Username:docker}
	I0729 11:20:01.761246 3127158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0729 11:20:01.790602 3127158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0729 11:20:01.819421 3127158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0729 11:20:01.846598 3127158 provision.go:87] duration metric: took 654.111147ms to configureAuth
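The configureAuth step above generates server.pem with the SANs listed at 11:20:01.215303 (san=[127.0.0.1 192.168.85.2 embed-certs-483052 localhost minikube]). A compact approximation with Go's standard crypto/x509, self-signed for brevity; minikube's real code in crypto.go signs server.pem with ca-key.pem, so this only shows the SAN and expiry wiring:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-483052"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s in the profile config
		// SANs copied from the san=[...] list in the log above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:    []string{"embed-certs-483052", "localhost", "minikube"},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}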
	I0729 11:20:01.846628 3127158 ubuntu.go:193] setting minikube options for container-runtime
	I0729 11:20:01.846828 3127158 config.go:182] Loaded profile config "embed-certs-483052": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0729 11:20:01.846842 3127158 machine.go:97] duration metric: took 1.339067723s to provisionDockerMachine
	I0729 11:20:01.846849 3127158 client.go:171] duration metric: took 9.181373646s to LocalClient.Create
	I0729 11:20:01.846869 3127158 start.go:167] duration metric: took 9.181444333s to libmachine.API.Create "embed-certs-483052"
	I0729 11:20:01.846879 3127158 start.go:293] postStartSetup for "embed-certs-483052" (driver="docker")
	I0729 11:20:01.846889 3127158 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0729 11:20:01.846945 3127158 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0729 11:20:01.846990 3127158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-483052
	I0729 11:20:01.864619 3127158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36774 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/embed-certs-483052/id_rsa Username:docker}
	I0729 11:20:01.962156 3127158 ssh_runner.go:195] Run: cat /etc/os-release
	I0729 11:20:01.965749 3127158 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0729 11:20:01.965796 3127158 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0729 11:20:01.965808 3127158 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0729 11:20:01.965819 3127158 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0729 11:20:01.965833 3127158 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-2904404/.minikube/addons for local assets ...
	I0729 11:20:01.965904 3127158 filesync.go:126] Scanning /home/jenkins/minikube-integration/19337-2904404/.minikube/files for local assets ...
	I0729 11:20:01.966001 3127158 filesync.go:149] local asset: /home/jenkins/minikube-integration/19337-2904404/.minikube/files/etc/ssl/certs/29097892.pem -> 29097892.pem in /etc/ssl/certs
	I0729 11:20:01.966115 3127158 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0729 11:20:01.975511 3127158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/files/etc/ssl/certs/29097892.pem --> /etc/ssl/certs/29097892.pem (1708 bytes)
	I0729 11:20:02.009160 3127158 start.go:296] duration metric: took 162.262897ms for postStartSetup
	I0729 11:20:02.009624 3127158 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-483052
	I0729 11:20:02.027677 3127158 profile.go:143] Saving config to /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/config.json ...
	I0729 11:20:02.028083 3127158 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 11:20:02.028140 3127158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-483052
	I0729 11:20:02.045218 3127158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36774 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/embed-certs-483052/id_rsa Username:docker}
	I0729 11:20:02.141009 3127158 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0729 11:20:02.145890 3127158 start.go:128] duration metric: took 9.485205263s to createHost
	I0729 11:20:02.145915 3127158 start.go:83] releasing machines lock for "embed-certs-483052", held for 9.485429343s
	I0729 11:20:02.145990 3127158 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-483052
	I0729 11:20:02.163549 3127158 ssh_runner.go:195] Run: cat /version.json
	I0729 11:20:02.163563 3127158 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0729 11:20:02.163604 3127158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-483052
	I0729 11:20:02.163622 3127158 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-483052
	I0729 11:20:02.182288 3127158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36774 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/embed-certs-483052/id_rsa Username:docker}
	I0729 11:20:02.184803 3127158 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36774 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/embed-certs-483052/id_rsa Username:docker}
	I0729 11:20:02.399840 3127158 ssh_runner.go:195] Run: systemctl --version
	I0729 11:20:02.404740 3127158 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0729 11:20:02.409410 3127158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0729 11:20:02.437397 3127158 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0729 11:20:02.437483 3127158 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0729 11:20:02.469393 3127158 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
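The two find commands above first patch the loopback CNI config, then move the conflicting bridge/podman configs aside by appending .mk_disabled, which is what "disabled [...] bridge cni config(s)" records. An in-process Go equivalent of the disable step, as a sketch only; minikube performs it over SSH with find and mv:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		// Same selection as the find expression: bridge or podman configs.
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			old := filepath.Join(dir, name)
			if err := os.Rename(old, old+".mk_disabled"); err == nil {
				fmt.Println("disabled", old)
			}
		}
	}
}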
	I0729 11:20:02.469418 3127158 start.go:495] detecting cgroup driver to use...
	I0729 11:20:02.469452 3127158 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0729 11:20:02.469507 3127158 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0729 11:20:02.482513 3127158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0729 11:20:02.495208 3127158 docker.go:217] disabling cri-docker service (if available) ...
	I0729 11:20:02.495273 3127158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0729 11:20:02.510035 3127158 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0729 11:20:02.525585 3127158 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0729 11:20:02.613616 3127158 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0729 11:20:02.726099 3127158 docker.go:233] disabling docker service ...
	I0729 11:20:02.726233 3127158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0729 11:20:02.749780 3127158 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0729 11:20:02.762675 3127158 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0729 11:20:02.860746 3127158 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0729 11:20:02.974714 3127158 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0729 11:20:02.988924 3127158 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0729 11:20:03.013215 3127158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0729 11:20:03.026284 3127158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0729 11:20:03.038463 3127158 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0729 11:20:03.038544 3127158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0729 11:20:03.050712 3127158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 11:20:03.063113 3127158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0729 11:20:03.075860 3127158 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0729 11:20:03.088504 3127158 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0729 11:20:03.098905 3127158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0729 11:20:03.109901 3127158 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0729 11:20:03.120520 3127158 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0729 11:20:03.131163 3127158 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0729 11:20:03.140642 3127158 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0729 11:20:03.150142 3127158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:20:03.234076 3127158 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0729 11:20:03.364421 3127158 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0729 11:20:03.364524 3127158 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
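"Will wait 60s for socket path" is a bounded poll for /run/containerd/containerd.sock after the containerd restart above. A minimal Go sketch of such a wait loop; the 500ms polling interval is an assumption, not minikube's actual cadence:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the path exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("containerd socket is ready")
}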
	I0729 11:20:03.369103 3127158 start.go:563] Will wait 60s for crictl version
	I0729 11:20:03.369206 3127158 ssh_runner.go:195] Run: which crictl
	I0729 11:20:03.373082 3127158 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0729 11:20:03.420825 3127158 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.19
	RuntimeApiVersion:  v1
	I0729 11:20:03.420921 3127158 ssh_runner.go:195] Run: containerd --version
	I0729 11:20:03.445571 3127158 ssh_runner.go:195] Run: containerd --version
	I0729 11:20:03.476681 3127158 out.go:177] * Preparing Kubernetes v1.30.3 on containerd 1.7.19 ...
	I0729 11:20:03.478941 3127158 cli_runner.go:164] Run: docker network inspect embed-certs-483052 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0729 11:20:03.494526 3127158 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0729 11:20:03.498379 3127158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:20:03.510176 3127158 kubeadm.go:883] updating cluster {Name:embed-certs-483052 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-483052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0729 11:20:03.510316 3127158 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0729 11:20:03.510382 3127158 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:20:03.560174 3127158 containerd.go:627] all images are preloaded for containerd runtime.
	I0729 11:20:03.560202 3127158 containerd.go:534] Images already preloaded, skipping extraction
	I0729 11:20:03.560268 3127158 ssh_runner.go:195] Run: sudo crictl images --output json
	I0729 11:20:03.598564 3127158 containerd.go:627] all images are preloaded for containerd runtime.
	I0729 11:20:03.598595 3127158 cache_images.go:84] Images are preloaded, skipping loading
	I0729 11:20:03.598604 3127158 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.30.3 containerd true true} ...
	I0729 11:20:03.598706 3127158 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-483052 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:embed-certs-483052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0729 11:20:03.598784 3127158 ssh_runner.go:195] Run: sudo crictl info
	I0729 11:20:03.644428 3127158 cni.go:84] Creating CNI manager for ""
	I0729 11:20:03.644452 3127158 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0729 11:20:03.644469 3127158 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0729 11:20:03.644502 3127158 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-483052 NodeName:embed-certs-483052 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0729 11:20:03.644684 3127158 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-483052"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
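	
	The kubeadm config above is rendered from the kubeadm options logged at kubeadm.go:181 and later copied to /var/tmp/minikube/kubeadm.yaml.new. A toy rendering of just the InitConfiguration fragment with Go's text/template; the template text here is a trimmed stand-in, not minikube's actual template:
	
	package main
	
	import (
		"log"
		"os"
		"text/template"
	)
	
	const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	`
	
	func main() {
		t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
		// Values mirror the kubeadm options line above.
		data := map[string]interface{}{
			"AdvertiseAddress": "192.168.85.2",
			"APIServerPort":    8443,
			"CRISocket":        "unix:///run/containerd/containerd.sock",
			"NodeName":         "embed-certs-483052",
		}
		if err := t.Execute(os.Stdout, data); err != nil {
			log.Fatal(err)
		}
	}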
	
	I0729 11:20:03.644799 3127158 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0729 11:20:03.654616 3127158 binaries.go:44] Found k8s binaries, skipping transfer
	I0729 11:20:03.654686 3127158 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0729 11:20:03.664104 3127158 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0729 11:20:03.683877 3127158 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0729 11:20:03.705080 3127158 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0729 11:20:03.724201 3127158 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0729 11:20:03.727765 3127158 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0729 11:20:03.739727 3127158 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0729 11:20:03.829896 3127158 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0729 11:20:03.847960 3127158 certs.go:68] Setting up /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052 for IP: 192.168.85.2
	I0729 11:20:03.847984 3127158 certs.go:194] generating shared ca certs ...
	I0729 11:20:03.848001 3127158 certs.go:226] acquiring lock for ca certs: {Name:mk2f7a1a044772cb2825bd46674f373ef156f656 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:20:03.848142 3127158 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19337-2904404/.minikube/ca.key
	I0729 11:20:03.848190 3127158 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19337-2904404/.minikube/proxy-client-ca.key
	I0729 11:20:03.848202 3127158 certs.go:256] generating profile certs ...
	I0729 11:20:03.848268 3127158 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/client.key
	I0729 11:20:03.848285 3127158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/client.crt with IP's: []
	I0729 11:20:04.245545 3127158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/client.crt ...
	I0729 11:20:04.245577 3127158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/client.crt: {Name:mk5fd7d9629a504b8275749b3f3ee6b6eadc98ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:20:04.246355 3127158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/client.key ...
	I0729 11:20:04.246374 3127158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/client.key: {Name:mk19dca522ae4caf2f2b9b22d07bc946d0dbf80c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:20:04.246475 3127158 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/apiserver.key.8d1daac3
	I0729 11:20:04.246518 3127158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/apiserver.crt.8d1daac3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0729 11:20:04.386929 3127158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/apiserver.crt.8d1daac3 ...
	I0729 11:20:04.386959 3127158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/apiserver.crt.8d1daac3: {Name:mk15ff3efe4d321e67a65aa365050e4dd2bd3b6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:20:04.387591 3127158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/apiserver.key.8d1daac3 ...
	I0729 11:20:04.387609 3127158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/apiserver.key.8d1daac3: {Name:mk51f60a0ecffcee7c60243759a36023af70af66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:20:04.388029 3127158 certs.go:381] copying /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/apiserver.crt.8d1daac3 -> /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/apiserver.crt
	I0729 11:20:04.388120 3127158 certs.go:385] copying /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/apiserver.key.8d1daac3 -> /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/apiserver.key
	I0729 11:20:04.388194 3127158 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/proxy-client.key
	I0729 11:20:04.388216 3127158 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/proxy-client.crt with IP's: []
	I0729 11:20:04.538475 3127158 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/proxy-client.crt ...
	I0729 11:20:04.538506 3127158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/proxy-client.crt: {Name:mkf807388a6725b9e0156df7656d4bb47439f3cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:20:04.539093 3127158 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/proxy-client.key ...
	I0729 11:20:04.539112 3127158 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/proxy-client.key: {Name:mk6986fe644c9b9c4a7269c48b356551159a056f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0729 11:20:04.539697 3127158 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/2909789.pem (1338 bytes)
	W0729 11:20:04.539769 3127158 certs.go:480] ignoring /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/2909789_empty.pem, impossibly tiny 0 bytes
	I0729 11:20:04.539820 3127158 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/ca-key.pem (1675 bytes)
	I0729 11:20:04.539854 3127158 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/ca.pem (1078 bytes)
	I0729 11:20:04.539882 3127158 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/cert.pem (1123 bytes)
	I0729 11:20:04.539916 3127158 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/key.pem (1675 bytes)
	I0729 11:20:04.539966 3127158 certs.go:484] found cert: /home/jenkins/minikube-integration/19337-2904404/.minikube/files/etc/ssl/certs/29097892.pem (1708 bytes)
	I0729 11:20:04.540580 3127158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0729 11:20:04.567843 3127158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0729 11:20:04.597080 3127158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0729 11:20:04.623289 3127158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0729 11:20:04.653340 3127158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0729 11:20:04.688146 3127158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0729 11:20:04.718752 3127158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0729 11:20:04.749173 3127158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/embed-certs-483052/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0729 11:20:04.777192 3127158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/files/etc/ssl/certs/29097892.pem --> /usr/share/ca-certificates/29097892.pem (1708 bytes)
	I0729 11:20:04.807228 3127158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0729 11:20:04.835030 3127158 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19337-2904404/.minikube/certs/2909789.pem --> /usr/share/ca-certificates/2909789.pem (1338 bytes)
	I0729 11:20:04.862572 3127158 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0729 11:20:04.889923 3127158 ssh_runner.go:195] Run: openssl version
	I0729 11:20:04.901562 3127158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0729 11:20:04.911256 3127158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:20:04.914864 3127158 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 29 10:24 /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:20:04.914930 3127158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0729 11:20:04.922189 3127158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0729 11:20:04.937477 3127158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2909789.pem && ln -fs /usr/share/ca-certificates/2909789.pem /etc/ssl/certs/2909789.pem"
	I0729 11:20:04.949880 3127158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2909789.pem
	I0729 11:20:04.953935 3127158 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 29 10:33 /usr/share/ca-certificates/2909789.pem
	I0729 11:20:04.954012 3127158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2909789.pem
	I0729 11:20:04.961991 3127158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2909789.pem /etc/ssl/certs/51391683.0"
	I0729 11:20:04.973947 3127158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/29097892.pem && ln -fs /usr/share/ca-certificates/29097892.pem /etc/ssl/certs/29097892.pem"
	I0729 11:20:04.983986 3127158 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/29097892.pem
	I0729 11:20:04.987927 3127158 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 29 10:33 /usr/share/ca-certificates/29097892.pem
	I0729 11:20:04.988004 3127158 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/29097892.pem
	I0729 11:20:04.996485 3127158 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/29097892.pem /etc/ssl/certs/3ec20f2e.0"
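The repeated test/ls/hash/link passes above (for minikubeCA.pem, 2909789.pem, and 29097892.pem) install each CA the way OpenSSL expects to find it: the PEM sits in /usr/share/ca-certificates, and a symlink named after its subject hash points at it from /etc/ssl/certs. A minimal sketch of one pass, reusing the paths from this run (the hash is whatever openssl prints; b5213941 above):

	# subject-hash name OpenSSL uses to look a CA up at verification time
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# expose the CA under <hash>.0, matching the b5213941.0 link created in the log above
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"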
	I0729 11:20:05.011410 3127158 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0729 11:20:05.015853 3127158 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0729 11:20:05.015936 3127158 kubeadm.go:392] StartCluster: {Name:embed-certs-483052 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-483052 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 11:20:05.016029 3127158 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0729 11:20:05.016101 3127158 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0729 11:20:05.061045 3127158 cri.go:89] found id: ""
	I0729 11:20:05.061120 3127158 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0729 11:20:05.070840 3127158 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0729 11:20:05.081267 3127158 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0729 11:20:05.081387 3127158 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0729 11:20:05.092162 3127158 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0729 11:20:05.092242 3127158 kubeadm.go:157] found existing configuration files:
	
	I0729 11:20:05.092340 3127158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0729 11:20:05.103141 3127158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0729 11:20:05.103234 3127158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0729 11:20:05.113137 3127158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0729 11:20:05.123629 3127158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0729 11:20:05.123739 3127158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0729 11:20:05.133082 3127158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0729 11:20:05.143070 3127158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0729 11:20:05.143170 3127158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0729 11:20:05.153436 3127158 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0729 11:20:05.163009 3127158 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0729 11:20:05.163077 3127158 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
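Each of the four grep-then-rm pairs above is the same stale-config check: a kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443. Here every file is absent (grep exits with status 2, "No such file or directory"), so the rm calls are no-ops and kubeadm regenerates all four below. Condensed into one loop, the equivalent of what the log shows:

	# stale-config cleanup: drop any kubeconfig that does not reference the expected endpoint
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/${f}.conf" \
	    || sudo rm -f "/etc/kubernetes/${f}.conf"
	done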
	I0729 11:20:05.172936 3127158 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0729 11:20:05.225959 3127158 kubeadm.go:310] [init] Using Kubernetes version: v1.30.3
	I0729 11:20:05.226250 3127158 kubeadm.go:310] [preflight] Running pre-flight checks
	I0729 11:20:05.269552 3127158 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0729 11:20:05.269627 3127158 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1065-aws
	I0729 11:20:05.269669 3127158 kubeadm.go:310] OS: Linux
	I0729 11:20:05.269718 3127158 kubeadm.go:310] CGROUPS_CPU: enabled
	I0729 11:20:05.269769 3127158 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0729 11:20:05.269818 3127158 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0729 11:20:05.269868 3127158 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0729 11:20:05.269918 3127158 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0729 11:20:05.269972 3127158 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0729 11:20:05.270019 3127158 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0729 11:20:05.270069 3127158 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0729 11:20:05.270144 3127158 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0729 11:20:05.339030 3127158 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0729 11:20:05.339224 3127158 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0729 11:20:05.339369 3127158 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
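The preflight hint is kubeadm's own output: the image pulls can be done ahead of init so the timed init step skips downloads. A sketch of that against this run's config, reusing the versioned binary PATH from the init command above (the version comes from the StartCluster dump):

	# optional pre-pull of the control-plane images referenced by the init config
	sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" \
	  kubeadm config images pull --kubernetes-version v1.30.3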
	I0729 11:20:05.599707 3127158 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0729 11:20:05.604084 3127158 out.go:204]   - Generating certificates and keys ...
	I0729 11:20:05.604271 3127158 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0729 11:20:05.604373 3127158 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0729 11:20:06.124296 3127158 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0729 11:20:06.958209 3127158 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
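From here the report interleaves two concurrent minikube processes, which is why the timestamps jump backwards: pid 3127158 is the embed-certs-483052 start (the kubeadm init above, resuming further down), while pid 3116606 is the old-k8s-version-398652 run waiting on apiserver health. To follow a single thread, filter by pid; the file name below is hypothetical, standing in for a saved copy of this report:

	# split the interleaved transcript by process id (test-report.txt is a placeholder name)
	grep ' 3127158 ' test-report.txt   # embed-certs-483052: kubeadm init
	grep ' 3116606 ' test-report.txt   # old-k8s-version-398652: apiserver healthz wait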
	I0729 11:20:08.338434 3116606 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 11:20:08.357291 3116606 api_server.go:72] duration metric: took 5m59.814002768s to wait for apiserver process to appear ...
	I0729 11:20:08.357315 3116606 api_server.go:88] waiting for apiserver healthz status ...
	I0729 11:20:08.357357 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0729 11:20:08.357418 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0729 11:20:08.443602 3116606 cri.go:89] found id: "55eabc6b310d11652dacd8619d5c8576e4a8dd6e56b763e6f5f40bd868a7aded"
	I0729 11:20:08.443628 3116606 cri.go:89] found id: "8db7d55daf4e8f1f7c356410dce4fc8bfe4e73b58c73519316918d020f07a738"
	I0729 11:20:08.443633 3116606 cri.go:89] found id: ""
	I0729 11:20:08.443642 3116606 logs.go:276] 2 containers: [55eabc6b310d11652dacd8619d5c8576e4a8dd6e56b763e6f5f40bd868a7aded 8db7d55daf4e8f1f7c356410dce4fc8bfe4e73b58c73519316918d020f07a738]
	I0729 11:20:08.443713 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:08.447894 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:08.456460 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0729 11:20:08.456538 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0729 11:20:08.509962 3116606 cri.go:89] found id: "d855c664b20f282851a23aa13af697ef4f539406374e1c860c26597b84f8ee75"
	I0729 11:20:08.509988 3116606 cri.go:89] found id: "587b9ef1a62073411270ee8720a4b580bb9466a8ed4aee8f1f4ef0f09e399e7c"
	I0729 11:20:08.509993 3116606 cri.go:89] found id: ""
	I0729 11:20:08.510000 3116606 logs.go:276] 2 containers: [d855c664b20f282851a23aa13af697ef4f539406374e1c860c26597b84f8ee75 587b9ef1a62073411270ee8720a4b580bb9466a8ed4aee8f1f4ef0f09e399e7c]
	I0729 11:20:08.510092 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:08.514390 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:08.518891 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0729 11:20:08.518976 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0729 11:20:08.589652 3116606 cri.go:89] found id: "ac883e66c537e35bc5030b86851432ea59b4a9c103d84e4ca5b61faffade7098"
	I0729 11:20:08.589677 3116606 cri.go:89] found id: "d8094d57752deded43c4f1971f720e95945f0e8e8bd5e4a2575c116f7dc73449"
	I0729 11:20:08.589682 3116606 cri.go:89] found id: ""
	I0729 11:20:08.589689 3116606 logs.go:276] 2 containers: [ac883e66c537e35bc5030b86851432ea59b4a9c103d84e4ca5b61faffade7098 d8094d57752deded43c4f1971f720e95945f0e8e8bd5e4a2575c116f7dc73449]
	I0729 11:20:08.589748 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:08.594238 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:08.601491 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0729 11:20:08.601565 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0729 11:20:08.670209 3116606 cri.go:89] found id: "92e67f37a7b9d727171d0240a5fde8b95850b192051b0f809bbe087f8c7de33a"
	I0729 11:20:08.670234 3116606 cri.go:89] found id: "7743ce5235b563b5fef6aed42a02b9652010558f0c0bca72fdd35f7237352e4e"
	I0729 11:20:08.670247 3116606 cri.go:89] found id: ""
	I0729 11:20:08.670271 3116606 logs.go:276] 2 containers: [92e67f37a7b9d727171d0240a5fde8b95850b192051b0f809bbe087f8c7de33a 7743ce5235b563b5fef6aed42a02b9652010558f0c0bca72fdd35f7237352e4e]
	I0729 11:20:08.670369 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:08.676313 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:08.682047 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0729 11:20:08.682176 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0729 11:20:08.768290 3116606 cri.go:89] found id: "54ffb19a0292eb77b61a76c3728fb619af5c455bf9ff1241a21b0069be4e8747"
	I0729 11:20:08.768312 3116606 cri.go:89] found id: "b2c3fad36616c573babfc67ee709885d5905cf5a54593886a6f579147c8ce133"
	I0729 11:20:08.768317 3116606 cri.go:89] found id: ""
	I0729 11:20:08.768324 3116606 logs.go:276] 2 containers: [54ffb19a0292eb77b61a76c3728fb619af5c455bf9ff1241a21b0069be4e8747 b2c3fad36616c573babfc67ee709885d5905cf5a54593886a6f579147c8ce133]
	I0729 11:20:08.768422 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:08.780515 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:08.792160 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0729 11:20:08.792258 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0729 11:20:08.939024 3116606 cri.go:89] found id: "8ccafb224e43a5b6518db9936d1dc9fd44a73e2192879bb5bf0f3ce3b4d175cc"
	I0729 11:20:08.939045 3116606 cri.go:89] found id: "789c7fdc7b8aac104b10d2c1cca0c6ce267d3325a6305aaea9f9af92bab8c889"
	I0729 11:20:08.939050 3116606 cri.go:89] found id: ""
	I0729 11:20:08.939057 3116606 logs.go:276] 2 containers: [8ccafb224e43a5b6518db9936d1dc9fd44a73e2192879bb5bf0f3ce3b4d175cc 789c7fdc7b8aac104b10d2c1cca0c6ce267d3325a6305aaea9f9af92bab8c889]
	I0729 11:20:08.939163 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:08.943529 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:08.949463 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0729 11:20:08.949547 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0729 11:20:08.999394 3116606 cri.go:89] found id: "be4fb3954f9193d0577447927a1b728347ba8abdcfffe06990bb5d05b6c8f49c"
	I0729 11:20:08.999418 3116606 cri.go:89] found id: "e47e4b203143f4c04a2625539152adf493fbd66f0141c8fa35d67c0eb9dcd15e"
	I0729 11:20:08.999423 3116606 cri.go:89] found id: ""
	I0729 11:20:08.999431 3116606 logs.go:276] 2 containers: [be4fb3954f9193d0577447927a1b728347ba8abdcfffe06990bb5d05b6c8f49c e47e4b203143f4c04a2625539152adf493fbd66f0141c8fa35d67c0eb9dcd15e]
	I0729 11:20:08.999503 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:09.004747 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:09.009729 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0729 11:20:09.009812 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0729 11:20:09.070708 3116606 cri.go:89] found id: "0afb69ae0e699da6d8df0dbfb7b284327d738087f9b4ba1a283917462e4ff191"
	I0729 11:20:09.070731 3116606 cri.go:89] found id: ""
	I0729 11:20:09.070739 3116606 logs.go:276] 1 containers: [0afb69ae0e699da6d8df0dbfb7b284327d738087f9b4ba1a283917462e4ff191]
	I0729 11:20:09.070808 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:09.075996 3116606 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0729 11:20:09.076084 3116606 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0729 11:20:09.132498 3116606 cri.go:89] found id: "63ccc5a016621ddee17a12e23e7873395935fcf7d04f3ffabff8ba671927254a"
	I0729 11:20:09.132527 3116606 cri.go:89] found id: "c353bab52107db86c72f21b2699f5c44a9e22f17ce40f5d83659ce4f08e9b3d4"
	I0729 11:20:09.132532 3116606 cri.go:89] found id: ""
	I0729 11:20:09.132539 3116606 logs.go:276] 2 containers: [63ccc5a016621ddee17a12e23e7873395935fcf7d04f3ffabff8ba671927254a c353bab52107db86c72f21b2699f5c44a9e22f17ce40f5d83659ce4f08e9b3d4]
	I0729 11:20:09.132621 3116606 ssh_runner.go:195] Run: which crictl
	I0729 11:20:09.137022 3116606 ssh_runner.go:195] Run: which crictl
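Every block above is the same discovery pattern, once per control-plane component: `sudo crictl ps -a --quiet --name=<component>` returns the matching container ids (two per component here, presumably the pre- and post-restart containers; a single one for kubernetes-dashboard), and `which crictl` resolves the binary used next. The gathering steps that follow then feed each id to `crictl logs`. Replayed by hand for the apiserver, using an id found above:

	# list every kube-apiserver container, running or exited
	sudo crictl ps -a --quiet --name=kube-apiserver
	# tail one id's logs, exactly as the gathering steps below do
	sudo /usr/bin/crictl logs --tail 400 55eabc6b310d11652dacd8619d5c8576e4a8dd6e56b763e6f5f40bd868a7aded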
	I0729 11:20:09.141269 3116606 logs.go:123] Gathering logs for kubelet ...
	I0729 11:20:09.141305 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0729 11:20:09.248117 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597182     661 reflector.go:138] object-"kube-system"/"kube-proxy-token-7kgps": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-7kgps" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:20:09.248357 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597314     661 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:20:09.248595 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597368     661 reflector.go:138] object-"kube-system"/"metrics-server-token-jpdkd": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-jpdkd" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:20:09.248812 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597412     661 reflector.go:138] object-"default"/"default-token-gc665": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-gc665" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:20:09.249020 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597478     661 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:20:09.249255 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597531     661 reflector.go:138] object-"kube-system"/"storage-provisioner-token-bnfpv": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-bnfpv" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:20:09.249478 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597580     661 reflector.go:138] object-"kube-system"/"coredns-token-gpx2v": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-gpx2v" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:20:09.249694 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:25 old-k8s-version-398652 kubelet[661]: E0729 11:14:25.597484     661 reflector.go:138] object-"kube-system"/"kindnet-token-vw6mq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-vw6mq" is forbidden: User "system:node:old-k8s-version-398652" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-398652' and this object
	W0729 11:20:09.259399 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:27 old-k8s-version-398652 kubelet[661]: E0729 11:14:27.458872     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0729 11:20:09.265320 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:27 old-k8s-version-398652 kubelet[661]: E0729 11:14:27.878876     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.269132 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:41 old-k8s-version-398652 kubelet[661]: E0729 11:14:41.668429     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0729 11:20:09.271415 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:51 old-k8s-version-398652 kubelet[661]: E0729 11:14:51.984489     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.271788 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:52 old-k8s-version-398652 kubelet[661]: E0729 11:14:52.978054     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.272069 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:53 old-k8s-version-398652 kubelet[661]: E0729 11:14:53.673014     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.272815 3116606 logs.go:138] Found kubelet problem: Jul 29 11:14:57 old-k8s-version-398652 kubelet[661]: E0729 11:14:57.307270     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.275520 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:05 old-k8s-version-398652 kubelet[661]: E0729 11:15:05.669370     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0729 11:20:09.276519 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:10 old-k8s-version-398652 kubelet[661]: E0729 11:15:10.052366     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.278566 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:17 old-k8s-version-398652 kubelet[661]: E0729 11:15:17.335946     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.278787 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:17 old-k8s-version-398652 kubelet[661]: E0729 11:15:17.660197     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.279403 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:31 old-k8s-version-398652 kubelet[661]: E0729 11:15:31.131386     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.279603 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:31 old-k8s-version-398652 kubelet[661]: E0729 11:15:31.660183     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.279975 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:37 old-k8s-version-398652 kubelet[661]: E0729 11:15:37.307168     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.280170 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:42 old-k8s-version-398652 kubelet[661]: E0729 11:15:42.660275     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.280506 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:51 old-k8s-version-398652 kubelet[661]: E0729 11:15:51.660440     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.283074 3116606 logs.go:138] Found kubelet problem: Jul 29 11:15:54 old-k8s-version-398652 kubelet[661]: E0729 11:15:54.670644     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0729 11:20:09.283413 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:05 old-k8s-version-398652 kubelet[661]: E0729 11:16:05.659590     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.283604 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:07 old-k8s-version-398652 kubelet[661]: E0729 11:16:07.660440     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.286269 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:19 old-k8s-version-398652 kubelet[661]: E0729 11:16:19.271655     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.286478 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:19 old-k8s-version-398652 kubelet[661]: E0729 11:16:19.669252     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.286816 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:27 old-k8s-version-398652 kubelet[661]: E0729 11:16:27.307974     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.287015 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:34 old-k8s-version-398652 kubelet[661]: E0729 11:16:34.659936     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.287353 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:41 old-k8s-version-398652 kubelet[661]: E0729 11:16:41.659566     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.287543 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:49 old-k8s-version-398652 kubelet[661]: E0729 11:16:49.659815     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.288736 3116606 logs.go:138] Found kubelet problem: Jul 29 11:16:54 old-k8s-version-398652 kubelet[661]: E0729 11:16:54.660512     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.288942 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:02 old-k8s-version-398652 kubelet[661]: E0729 11:17:02.664093     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.289292 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:07 old-k8s-version-398652 kubelet[661]: E0729 11:17:07.659718     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.289488 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:13 old-k8s-version-398652 kubelet[661]: E0729 11:17:13.659955     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.289826 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:22 old-k8s-version-398652 kubelet[661]: E0729 11:17:22.659654     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.292345 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:24 old-k8s-version-398652 kubelet[661]: E0729 11:17:24.668985     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0729 11:20:09.292700 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:34 old-k8s-version-398652 kubelet[661]: E0729 11:17:34.659569     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.292890 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:36 old-k8s-version-398652 kubelet[661]: E0729 11:17:36.661730     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.293078 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:48 old-k8s-version-398652 kubelet[661]: E0729 11:17:48.661201     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.293681 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:50 old-k8s-version-398652 kubelet[661]: E0729 11:17:50.522025     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.294017 3116606 logs.go:138] Found kubelet problem: Jul 29 11:17:57 old-k8s-version-398652 kubelet[661]: E0729 11:17:57.307619     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.294208 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:01 old-k8s-version-398652 kubelet[661]: E0729 11:18:01.660045     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.294544 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:08 old-k8s-version-398652 kubelet[661]: E0729 11:18:08.661963     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.294733 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:16 old-k8s-version-398652 kubelet[661]: E0729 11:18:16.660611     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.295068 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:21 old-k8s-version-398652 kubelet[661]: E0729 11:18:21.659655     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.295256 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:29 old-k8s-version-398652 kubelet[661]: E0729 11:18:29.660643     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.295601 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:32 old-k8s-version-398652 kubelet[661]: E0729 11:18:32.659948     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.295982 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:40 old-k8s-version-398652 kubelet[661]: E0729 11:18:40.660579     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.296323 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:44 old-k8s-version-398652 kubelet[661]: E0729 11:18:44.660231     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.296513 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:51 old-k8s-version-398652 kubelet[661]: E0729 11:18:51.660081     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.296859 3116606 logs.go:138] Found kubelet problem: Jul 29 11:18:55 old-k8s-version-398652 kubelet[661]: E0729 11:18:55.660086     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.297049 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:02 old-k8s-version-398652 kubelet[661]: E0729 11:19:02.660084     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.297385 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:06 old-k8s-version-398652 kubelet[661]: E0729 11:19:06.660508     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.297575 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:17 old-k8s-version-398652 kubelet[661]: E0729 11:19:17.660133     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.297912 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:19 old-k8s-version-398652 kubelet[661]: E0729 11:19:19.659640     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.298101 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:32 old-k8s-version-398652 kubelet[661]: E0729 11:19:32.660866     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.298437 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:34 old-k8s-version-398652 kubelet[661]: E0729 11:19:34.659747     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.298629 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:45 old-k8s-version-398652 kubelet[661]: E0729 11:19:45.659987     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.298964 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:47 old-k8s-version-398652 kubelet[661]: E0729 11:19:47.660198     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:09.299158 3116606 logs.go:138] Found kubelet problem: Jul 29 11:19:58 old-k8s-version-398652 kubelet[661]: E0729 11:19:58.666044     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:09.299494 3116606 logs.go:138] Found kubelet problem: Jul 29 11:20:02 old-k8s-version-398652 kubelet[661]: E0729 11:20:02.660579     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
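All of the "Found kubelet problem" warnings above reduce to two repeating failures: metrics-server cannot pull its image because it points at the unresolvable host fake.domain (ErrImagePull, then ImagePullBackOff; presumably injected by the test), and dashboard-metrics-scraper sits in CrashLoopBackOff with the restart back-off growing 10s, 20s, 40s, 1m20s, 2m40s. To inspect the same events directly, with pod names taken from the log and the kubectl context named after the profile as elsewhere in this report:

	# events and restart counts for the two failing pods
	kubectl --context old-k8s-version-398652 -n kube-system describe pod metrics-server-9975d5f86-c578w
	kubectl --context old-k8s-version-398652 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-8d5bb5db8-dwnhw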
	I0729 11:20:09.299504 3116606 logs.go:123] Gathering logs for etcd [d855c664b20f282851a23aa13af697ef4f539406374e1c860c26597b84f8ee75] ...
	I0729 11:20:09.299518 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d855c664b20f282851a23aa13af697ef4f539406374e1c860c26597b84f8ee75"
	I0729 11:20:09.373979 3116606 logs.go:123] Gathering logs for kindnet [be4fb3954f9193d0577447927a1b728347ba8abdcfffe06990bb5d05b6c8f49c] ...
	I0729 11:20:09.374044 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be4fb3954f9193d0577447927a1b728347ba8abdcfffe06990bb5d05b6c8f49c"
	I0729 11:20:09.477108 3116606 logs.go:123] Gathering logs for kube-scheduler [7743ce5235b563b5fef6aed42a02b9652010558f0c0bca72fdd35f7237352e4e] ...
	I0729 11:20:09.477158 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7743ce5235b563b5fef6aed42a02b9652010558f0c0bca72fdd35f7237352e4e"
	I0729 11:20:09.557836 3116606 logs.go:123] Gathering logs for kube-controller-manager [789c7fdc7b8aac104b10d2c1cca0c6ce267d3325a6305aaea9f9af92bab8c889] ...
	I0729 11:20:09.558052 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 789c7fdc7b8aac104b10d2c1cca0c6ce267d3325a6305aaea9f9af92bab8c889"
	I0729 11:20:07.627938 3127158 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0729 11:20:08.673128 3127158 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0729 11:20:09.102042 3127158 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0729 11:20:09.102176 3127158 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-483052 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0729 11:20:09.611713 3127158 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0729 11:20:09.613760 3127158 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-483052 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0729 11:20:10.313695 3127158 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0729 11:20:10.978710 3127158 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0729 11:20:11.911342 3127158 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0729 11:20:11.911699 3127158 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0729 11:20:12.872084 3127158 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0729 11:20:13.305066 3127158 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0729 11:20:13.807550 3127158 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0729 11:20:14.143306 3127158 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0729 11:20:14.469109 3127158 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0729 11:20:14.469909 3127158 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0729 11:20:14.472917 3127158 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0729 11:20:09.642070 3116606 logs.go:123] Gathering logs for kindnet [e47e4b203143f4c04a2625539152adf493fbd66f0141c8fa35d67c0eb9dcd15e] ...
	I0729 11:20:09.642145 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e47e4b203143f4c04a2625539152adf493fbd66f0141c8fa35d67c0eb9dcd15e"
	I0729 11:20:09.751747 3116606 logs.go:123] Gathering logs for dmesg ...
	I0729 11:20:09.751827 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0729 11:20:09.779246 3116606 logs.go:123] Gathering logs for describe nodes ...
	I0729 11:20:09.779276 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0729 11:20:09.988272 3116606 logs.go:123] Gathering logs for kube-apiserver [55eabc6b310d11652dacd8619d5c8576e4a8dd6e56b763e6f5f40bd868a7aded] ...
	I0729 11:20:09.988348 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 55eabc6b310d11652dacd8619d5c8576e4a8dd6e56b763e6f5f40bd868a7aded"
	I0729 11:20:10.084649 3116606 logs.go:123] Gathering logs for kube-apiserver [8db7d55daf4e8f1f7c356410dce4fc8bfe4e73b58c73519316918d020f07a738] ...
	I0729 11:20:10.084736 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8db7d55daf4e8f1f7c356410dce4fc8bfe4e73b58c73519316918d020f07a738"
	I0729 11:20:10.182842 3116606 logs.go:123] Gathering logs for kube-scheduler [92e67f37a7b9d727171d0240a5fde8b95850b192051b0f809bbe087f8c7de33a] ...
	I0729 11:20:10.182931 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 92e67f37a7b9d727171d0240a5fde8b95850b192051b0f809bbe087f8c7de33a"
	I0729 11:20:10.299400 3116606 logs.go:123] Gathering logs for containerd ...
	I0729 11:20:10.299424 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0729 11:20:10.366583 3116606 logs.go:123] Gathering logs for kube-controller-manager [8ccafb224e43a5b6518db9936d1dc9fd44a73e2192879bb5bf0f3ce3b4d175cc] ...
	I0729 11:20:10.366659 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8ccafb224e43a5b6518db9936d1dc9fd44a73e2192879bb5bf0f3ce3b4d175cc"
	I0729 11:20:10.454249 3116606 logs.go:123] Gathering logs for storage-provisioner [63ccc5a016621ddee17a12e23e7873395935fcf7d04f3ffabff8ba671927254a] ...
	I0729 11:20:10.454348 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 63ccc5a016621ddee17a12e23e7873395935fcf7d04f3ffabff8ba671927254a"
	I0729 11:20:10.519773 3116606 logs.go:123] Gathering logs for container status ...
	I0729 11:20:10.519822 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0729 11:20:10.579925 3116606 logs.go:123] Gathering logs for etcd [587b9ef1a62073411270ee8720a4b580bb9466a8ed4aee8f1f4ef0f09e399e7c] ...
	I0729 11:20:10.579972 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 587b9ef1a62073411270ee8720a4b580bb9466a8ed4aee8f1f4ef0f09e399e7c"
	I0729 11:20:10.645837 3116606 logs.go:123] Gathering logs for coredns [ac883e66c537e35bc5030b86851432ea59b4a9c103d84e4ca5b61faffade7098] ...
	I0729 11:20:10.645871 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ac883e66c537e35bc5030b86851432ea59b4a9c103d84e4ca5b61faffade7098"
	I0729 11:20:10.710387 3116606 logs.go:123] Gathering logs for coredns [d8094d57752deded43c4f1971f720e95945f0e8e8bd5e4a2575c116f7dc73449] ...
	I0729 11:20:10.710423 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d8094d57752deded43c4f1971f720e95945f0e8e8bd5e4a2575c116f7dc73449"
	I0729 11:20:10.763078 3116606 logs.go:123] Gathering logs for kube-proxy [54ffb19a0292eb77b61a76c3728fb619af5c455bf9ff1241a21b0069be4e8747] ...
	I0729 11:20:10.763114 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 54ffb19a0292eb77b61a76c3728fb619af5c455bf9ff1241a21b0069be4e8747"
	I0729 11:20:10.819727 3116606 logs.go:123] Gathering logs for kube-proxy [b2c3fad36616c573babfc67ee709885d5905cf5a54593886a6f579147c8ce133] ...
	I0729 11:20:10.819756 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b2c3fad36616c573babfc67ee709885d5905cf5a54593886a6f579147c8ce133"
	I0729 11:20:10.876517 3116606 logs.go:123] Gathering logs for kubernetes-dashboard [0afb69ae0e699da6d8df0dbfb7b284327d738087f9b4ba1a283917462e4ff191] ...
	I0729 11:20:10.876546 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0afb69ae0e699da6d8df0dbfb7b284327d738087f9b4ba1a283917462e4ff191"
	I0729 11:20:10.961902 3116606 logs.go:123] Gathering logs for storage-provisioner [c353bab52107db86c72f21b2699f5c44a9e22f17ce40f5d83659ce4f08e9b3d4] ...
	I0729 11:20:10.961934 3116606 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c353bab52107db86c72f21b2699f5c44a9e22f17ce40f5d83659ce4f08e9b3d4"
	I0729 11:20:11.048902 3116606 out.go:304] Setting ErrFile to fd 2...
	I0729 11:20:11.048930 3116606 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0729 11:20:11.048991 3116606 out.go:239] X Problems detected in kubelet:
	W0729 11:20:11.049007 3116606 out.go:239]   Jul 29 11:19:34 old-k8s-version-398652 kubelet[661]: E0729 11:19:34.659747     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:11.049015 3116606 out.go:239]   Jul 29 11:19:45 old-k8s-version-398652 kubelet[661]: E0729 11:19:45.659987     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:11.049024 3116606 out.go:239]   Jul 29 11:19:47 old-k8s-version-398652 kubelet[661]: E0729 11:19:47.660198     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	W0729 11:20:11.049157 3116606 out.go:239]   Jul 29 11:19:58 old-k8s-version-398652 kubelet[661]: E0729 11:19:58.666044     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0729 11:20:11.049170 3116606 out.go:239]   Jul 29 11:20:02 old-k8s-version-398652 kubelet[661]: E0729 11:20:02.660579     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	I0729 11:20:11.049180 3116606 out.go:304] Setting ErrFile to fd 2...
	I0729 11:20:11.049187 3116606 out.go:338] TERM=,COLORTERM=, which probably does not support color
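	
	The kubelet problems flagged above are the two pods that never stabilize in this run: dashboard-metrics-scraper crash-loops with a 2m40s back-off, and metrics-server cannot pull the unresolvable fake.domain/registry.k8s.io/echoserver:1.4 image. A minimal follow-up sketch, assuming the same cluster context and using the pod names taken from the messages above:
	
		# Output of the last crashed dashboard-metrics-scraper container
		kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-8d5bb5db8-dwnhw --previous
		# Confirm the ImagePullBackOff reason on the metrics-server pod
		kubectl -n kube-system describe pod metrics-server-9975d5f86-c578w
	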
	I0729 11:20:14.475217 3127158 out.go:204]   - Booting up control plane ...
	I0729 11:20:14.475320 3127158 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0729 11:20:14.475418 3127158 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0729 11:20:14.476310 3127158 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0729 11:20:14.487423 3127158 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0729 11:20:14.489579 3127158 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0729 11:20:14.489639 3127158 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0729 11:20:14.608332 3127158 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0729 11:20:14.608418 3127158 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0729 11:20:15.606285 3127158 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001745985s
	I0729 11:20:15.606379 3127158 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
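	
	The [certs], [kubeconfig], [control-plane] and [kubelet-start] lines above are standard kubeadm init phases for the embed-certs-483052 profile, which is starting concurrently with the failing old-k8s-version run. Each phase can also be run on its own when debugging bring-up; a sketch, assuming kubeadm is on the node's PATH with its default configuration:
	
		# Regenerate all certificates, then all kubeconfig files (illustrative)
		sudo kubeadm init phase certs all
		sudo kubeadm init phase kubeconfig all
		# Re-emit the static Pod manifests into /etc/kubernetes/manifests
		sudo kubeadm init phase control-plane all
	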
	I0729 11:20:21.050100 3116606 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0729 11:20:21.060161 3116606 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0729 11:20:21.063295 3116606 out.go:177] 
	W0729 11:20:21.065653 3116606 out.go:239] X Exiting due to K8S_UNHEALTHY_CONTROL_PLANE: wait 6m0s for node: wait for healthy API server: controlPlane never updated to v1.20.0
	W0729 11:20:21.065698 3116606 out.go:239] * Suggestion: Control Plane could not update, try minikube delete --all --purge
	W0729 11:20:21.065717 3116606 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/11417
	W0729 11:20:21.065725 3116606 out.go:239] * 
	W0729 11:20:21.067108 3116606 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0729 11:20:21.068724 3116606 out.go:177] 
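	
	This is the test's terminal failure: the apiserver answers /healthz with 200, but the control plane never reports the expected v1.20.0 version within the 6m0s wait, so minikube exits with K8S_UNHEALTHY_CONTROL_PLANE. The remediation and log capture the message suggests, written out as commands (the profile flag is added here for this run's profile name):
	
		minikube delete --all --purge
		minikube logs --file=logs.txt -p old-k8s-version-398652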
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	c30481ad3a69a       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   7fcdd900fbd39       dashboard-metrics-scraper-8d5bb5db8-dwnhw
	0afb69ae0e699       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   6deb75e10392e       kubernetes-dashboard-cd95d586-949wc
	be4fb3954f919       f42786f8afd22       5 minutes ago       Running             kindnet-cni                 1                   65faece3ece2c       kindnet-q2t7q
	0c75a4c0b37f4       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   4c3b27733780e       busybox
	ac883e66c537e       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   a1ab0790f9d33       coredns-74ff55c5b-tx9sc
	63ccc5a016621       ba04bb24b9575       5 minutes ago       Running             storage-provisioner         1                   50efe174af4a1       storage-provisioner
	54ffb19a0292e       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   01fe4eed27dbe       kube-proxy-jzn6w
	d855c664b20f2       05b738aa1bc63       6 minutes ago       Running             etcd                        1                   03d8b2d28f471       etcd-old-k8s-version-398652
	8ccafb224e43a       1df8a2b116bd1       6 minutes ago       Running             kube-controller-manager     1                   48b00900aeb15       kube-controller-manager-old-k8s-version-398652
	92e67f37a7b9d       e7605f88f17d6       6 minutes ago       Running             kube-scheduler              1                   6220115fbac37       kube-scheduler-old-k8s-version-398652
	55eabc6b310d1       2c08bbbc02d3a       6 minutes ago       Running             kube-apiserver              1                   72fe390c41b97       kube-apiserver-old-k8s-version-398652
	b9bc36aa42f33       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   4da256efb1af0       busybox
	d8094d57752de       db91994f4ee8f       8 minutes ago       Exited              coredns                     0                   e9c0549b9d9d4       coredns-74ff55c5b-tx9sc
	e47e4b203143f       f42786f8afd22       8 minutes ago       Exited              kindnet-cni                 0                   4261ae4f9b193       kindnet-q2t7q
	c353bab52107d       ba04bb24b9575       8 minutes ago       Exited              storage-provisioner         0                   08b3ee4b4ede1       storage-provisioner
	b2c3fad36616c       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   14764f19e956e       kube-proxy-jzn6w
	587b9ef1a6207       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   3fefc813d8a68       etcd-old-k8s-version-398652
	7743ce5235b56       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   90b60f4265104       kube-scheduler-old-k8s-version-398652
	8db7d55daf4e8       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   861c20fd3d5db       kube-apiserver-old-k8s-version-398652
	789c7fdc7b8aa       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   ab2c8d280fc11       kube-controller-manager-old-k8s-version-398652
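	
	The table above is the "container status" gather step from earlier in the log; on the node it runs (verbatim from the ssh_runner line):
	
		sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	
	Note that the attempt-0 entries are the Exited containers from the initial start before the restart; after the restart only dashboard-metrics-scraper keeps exiting, now at attempt 5.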
	
	
	==> containerd <==
	Jul 29 11:16:18 old-k8s-version-398652 containerd[568]: time="2024-07-29T11:16:18.684374038Z" level=info msg="CreateContainer within sandbox \"7fcdd900fbd394ce562f5495153138027954a6b0dbe1a17b5cb1facb5c45d9ff\" for name:\"dashboard-metrics-scraper\" attempt:4 returns container id \"36f56410ee270e7bf78167f7bee4a6d0f6c5157b9a871b29492c03e6dcce1e77\""
	Jul 29 11:16:18 old-k8s-version-398652 containerd[568]: time="2024-07-29T11:16:18.685558648Z" level=info msg="StartContainer for \"36f56410ee270e7bf78167f7bee4a6d0f6c5157b9a871b29492c03e6dcce1e77\""
	Jul 29 11:16:18 old-k8s-version-398652 containerd[568]: time="2024-07-29T11:16:18.757343315Z" level=info msg="StartContainer for \"36f56410ee270e7bf78167f7bee4a6d0f6c5157b9a871b29492c03e6dcce1e77\" returns successfully"
	Jul 29 11:16:18 old-k8s-version-398652 containerd[568]: time="2024-07-29T11:16:18.787103109Z" level=info msg="shim disconnected" id=36f56410ee270e7bf78167f7bee4a6d0f6c5157b9a871b29492c03e6dcce1e77 namespace=k8s.io
	Jul 29 11:16:18 old-k8s-version-398652 containerd[568]: time="2024-07-29T11:16:18.787166690Z" level=warning msg="cleaning up after shim disconnected" id=36f56410ee270e7bf78167f7bee4a6d0f6c5157b9a871b29492c03e6dcce1e77 namespace=k8s.io
	Jul 29 11:16:18 old-k8s-version-398652 containerd[568]: time="2024-07-29T11:16:18.787177201Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jul 29 11:16:19 old-k8s-version-398652 containerd[568]: time="2024-07-29T11:16:19.273420498Z" level=info msg="RemoveContainer for \"c94a5d83336a4d3d0f342786bcac50dacd43b3123ea7c9d91479e7bdc4d7ad7c\""
	Jul 29 11:16:19 old-k8s-version-398652 containerd[568]: time="2024-07-29T11:16:19.279776687Z" level=info msg="RemoveContainer for \"c94a5d83336a4d3d0f342786bcac50dacd43b3123ea7c9d91479e7bdc4d7ad7c\" returns successfully"
	Jul 29 11:17:24 old-k8s-version-398652 containerd[568]: time="2024-07-29T11:17:24.660919429Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 29 11:17:24 old-k8s-version-398652 containerd[568]: time="2024-07-29T11:17:24.666120661Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Jul 29 11:17:24 old-k8s-version-398652 containerd[568]: time="2024-07-29T11:17:24.668161161Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Jul 29 11:17:24 old-k8s-version-398652 containerd[568]: time="2024-07-29T11:17:24.668250260Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Jul 29 11:17:49 old-k8s-version-398652 containerd[568]: time="2024-07-29T11:17:49.661407823Z" level=info msg="CreateContainer within sandbox \"7fcdd900fbd394ce562f5495153138027954a6b0dbe1a17b5cb1facb5c45d9ff\" for container name:\"dashboard-metrics-scraper\" attempt:5"
	Jul 29 11:17:49 old-k8s-version-398652 containerd[568]: time="2024-07-29T11:17:49.676960501Z" level=info msg="CreateContainer within sandbox \"7fcdd900fbd394ce562f5495153138027954a6b0dbe1a17b5cb1facb5c45d9ff\" for name:\"dashboard-metrics-scraper\" attempt:5 returns container id \"c30481ad3a69aa82b534d6e2b4ddc96ed457a530c96b9a26b461e4bc8578b4a4\""
	Jul 29 11:17:49 old-k8s-version-398652 containerd[568]: time="2024-07-29T11:17:49.677686635Z" level=info msg="StartContainer for \"c30481ad3a69aa82b534d6e2b4ddc96ed457a530c96b9a26b461e4bc8578b4a4\""
	Jul 29 11:17:49 old-k8s-version-398652 containerd[568]: time="2024-07-29T11:17:49.746227710Z" level=info msg="StartContainer for \"c30481ad3a69aa82b534d6e2b4ddc96ed457a530c96b9a26b461e4bc8578b4a4\" returns successfully"
	Jul 29 11:17:49 old-k8s-version-398652 containerd[568]: time="2024-07-29T11:17:49.769381504Z" level=info msg="shim disconnected" id=c30481ad3a69aa82b534d6e2b4ddc96ed457a530c96b9a26b461e4bc8578b4a4 namespace=k8s.io
	Jul 29 11:17:49 old-k8s-version-398652 containerd[568]: time="2024-07-29T11:17:49.769440770Z" level=warning msg="cleaning up after shim disconnected" id=c30481ad3a69aa82b534d6e2b4ddc96ed457a530c96b9a26b461e4bc8578b4a4 namespace=k8s.io
	Jul 29 11:17:49 old-k8s-version-398652 containerd[568]: time="2024-07-29T11:17:49.769452125Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Jul 29 11:17:50 old-k8s-version-398652 containerd[568]: time="2024-07-29T11:17:50.524748350Z" level=info msg="RemoveContainer for \"36f56410ee270e7bf78167f7bee4a6d0f6c5157b9a871b29492c03e6dcce1e77\""
	Jul 29 11:17:50 old-k8s-version-398652 containerd[568]: time="2024-07-29T11:17:50.531181019Z" level=info msg="RemoveContainer for \"36f56410ee270e7bf78167f7bee4a6d0f6c5157b9a871b29492c03e6dcce1e77\" returns successfully"
	Jul 29 11:20:09 old-k8s-version-398652 containerd[568]: time="2024-07-29T11:20:09.664492379Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 29 11:20:09 old-k8s-version-398652 containerd[568]: time="2024-07-29T11:20:09.679943315Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Jul 29 11:20:09 old-k8s-version-398652 containerd[568]: time="2024-07-29T11:20:09.690943793Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Jul 29 11:20:09 old-k8s-version-398652 containerd[568]: time="2024-07-29T11:20:09.691387778Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
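	
	The containerd log shows both failure modes directly: each new dashboard-metrics-scraper container starts and immediately exits (the "shim disconnected" / "cleaning up dead shim" sequences), and every pull of fake.domain/registry.k8s.io/echoserver:1.4 fails DNS resolution against 192.168.76.1:53. A hedged way to reproduce the pull failure by hand on the node:
	
		# Both should fail the same way: fake.domain does not resolve
		nslookup fake.domain 192.168.76.1
		sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4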
	
	
	==> coredns [ac883e66c537e35bc5030b86851432ea59b4a9c103d84e4ca5b61faffade7098] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:54206 - 35266 "HINFO IN 244849776497052140.6152515205915740578. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021416404s
	
	
	==> coredns [d8094d57752deded43c4f1971f720e95945f0e8e8bd5e4a2575c116f7dc73449] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:49801 - 38705 "HINFO IN 3502726009889333312.4564538022650334914. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.035154997s
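	
	Both coredns containers (the Exited attempt-0 instance and the Running attempt-1 instance) come up cleanly with the same configuration MD5; the single HINFO NXDOMAIN line in each appears to be CoreDNS's startup loop-detection probe rather than a serving error. An illustrative in-cluster probe, assuming the conventional kube-dns ClusterIP of 10.96.0.10 (an assumption, not read from this log):
	
		kubectl run dnsprobe --rm -it --image=busybox --restart=Never -- nslookup kubernetes.default 10.96.0.10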
	
	
	==> describe nodes <==
	Name:               old-k8s-version-398652
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-398652
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f19ff4e08911d7fac9ac213eb2a365a93d960638
	                    minikube.k8s.io/name=old-k8s-version-398652
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_29T11_11_37_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Jul 2024 11:11:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-398652
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Jul 2024 11:20:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Jul 2024 11:15:26 +0000   Mon, 29 Jul 2024 11:11:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Jul 2024 11:15:26 +0000   Mon, 29 Jul 2024 11:11:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Jul 2024 11:15:26 +0000   Mon, 29 Jul 2024 11:11:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Jul 2024 11:15:26 +0000   Mon, 29 Jul 2024 11:11:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-398652
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022364Ki
	  pods:               110
	System Info:
	  Machine ID:                 5359855dc4734d1c905e9729f4b1eefc
	  System UUID:                5e4371d5-20b5-4bc5-9ef8-25a190475e13
	  Boot ID:                    9d805461-0494-4168-a7a3-1fdbd78d16da
	  Kernel Version:             5.15.0-1065-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.19
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m45s
	  kube-system                 coredns-74ff55c5b-tx9sc                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m30s
	  kube-system                 etcd-old-k8s-version-398652                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m37s
	  kube-system                 kindnet-q2t7q                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m30s
	  kube-system                 kube-apiserver-old-k8s-version-398652             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-controller-manager-old-k8s-version-398652    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-proxy-jzn6w                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m30s
	  kube-system                 kube-scheduler-old-k8s-version-398652             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 metrics-server-9975d5f86-c578w                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m35s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m29s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-dwnhw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-949wc               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)   100m (5%)
	  memory             420Mi (5%)   220Mi (2%)
	  ephemeral-storage  100Mi (0%)   0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  8m57s (x4 over 8m57s)  kubelet     Node old-k8s-version-398652 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m57s (x4 over 8m57s)  kubelet     Node old-k8s-version-398652 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m57s (x5 over 8m57s)  kubelet     Node old-k8s-version-398652 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m38s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m38s                  kubelet     Node old-k8s-version-398652 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m38s                  kubelet     Node old-k8s-version-398652 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m38s                  kubelet     Node old-k8s-version-398652 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m37s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m30s                  kubelet     Node old-k8s-version-398652 status is now: NodeReady
	  Normal  Starting                 8m29s                  kube-proxy  Starting kube-proxy.
	  Normal  Starting                 6m6s                   kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m6s (x8 over 6m6s)    kubelet     Node old-k8s-version-398652 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m6s (x8 over 6m6s)    kubelet     Node old-k8s-version-398652 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m6s (x7 over 6m6s)    kubelet     Node old-k8s-version-398652 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m6s                   kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m55s                  kube-proxy  Starting kube-proxy.
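	
	The node description above comes from the "describe nodes" gather step, which runs the bundled kubectl on the node (verbatim from the ssh_runner line earlier):
	
		sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	
	Two points relevant to the failure: the node is Ready with no taints, and the failing workloads are blocked on image pulls and crash loops, not on scheduling or resources.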
	
	
	==> dmesg <==
	[  +0.001030] FS-Cache: O-key=[8] 'a84c5c0100000000'
	[  +0.000707] FS-Cache: N-cookie c=000000c0 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.000933] FS-Cache: N-cookie d=00000000246b9f20{9p.inode} n=0000000012282f8a
	[  +0.001046] FS-Cache: N-key=[8] 'a84c5c0100000000'
	[  +0.012778] FS-Cache: Duplicate cookie detected
	[  +0.000690] FS-Cache: O-cookie c=000000ba [p=000000b7 fl=226 nc=0 na=1]
	[  +0.000980] FS-Cache: O-cookie d=00000000246b9f20{9p.inode} n=0000000020633c42
	[  +0.001089] FS-Cache: O-key=[8] 'a84c5c0100000000'
	[  +0.000697] FS-Cache: N-cookie c=000000c1 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.000928] FS-Cache: N-cookie d=00000000246b9f20{9p.inode} n=00000000d6fa93af
	[  +0.001079] FS-Cache: N-key=[8] 'a84c5c0100000000'
	[  +2.691831] FS-Cache: Duplicate cookie detected
	[  +0.000687] FS-Cache: O-cookie c=000000b8 [p=000000b7 fl=226 nc=0 na=1]
	[  +0.000987] FS-Cache: O-cookie d=00000000246b9f20{9p.inode} n=000000004adf4744
	[  +0.001028] FS-Cache: O-key=[8] 'a74c5c0100000000'
	[  +0.000706] FS-Cache: N-cookie c=000000c3 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.000930] FS-Cache: N-cookie d=00000000246b9f20{9p.inode} n=000000004ad11536
	[  +0.001045] FS-Cache: N-key=[8] 'a74c5c0100000000'
	[  +0.373455] FS-Cache: Duplicate cookie detected
	[  +0.000708] FS-Cache: O-cookie c=000000bd [p=000000b7 fl=226 nc=0 na=1]
	[  +0.000953] FS-Cache: O-cookie d=00000000246b9f20{9p.inode} n=000000004f445661
	[  +0.001074] FS-Cache: O-key=[8] 'ad4c5c0100000000'
	[  +0.000696] FS-Cache: N-cookie c=000000c4 [p=000000b7 fl=2 nc=0 na=1]
	[  +0.000926] FS-Cache: N-cookie d=00000000246b9f20{9p.inode} n=00000000a6d43a0e
	[  +0.001036] FS-Cache: N-key=[8] 'ad4c5c0100000000'
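	
	The dmesg excerpt is dominated by FS-Cache "Duplicate cookie detected" warnings on 9p inodes, which look like kernel caching noise from the shared 9p filesystem rather than anything specific to this test. The gather command, verbatim from the log:
	
		sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400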
	
	
	==> etcd [587b9ef1a62073411270ee8720a4b580bb9466a8ed4aee8f1f4ef0f09e399e7c] <==
	raft2024/07/29 11:11:27 INFO: ea7e25599daad906 became candidate at term 2
	raft2024/07/29 11:11:27 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/07/29 11:11:27 INFO: ea7e25599daad906 became leader at term 2
	raft2024/07/29 11:11:27 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-07-29 11:11:27.592286 I | etcdserver: published {Name:old-k8s-version-398652 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-07-29 11:11:27.592333 I | embed: ready to serve client requests
	2024-07-29 11:11:27.594195 I | embed: serving client requests on 127.0.0.1:2379
	2024-07-29 11:11:27.594429 I | etcdserver: setting up the initial cluster version to 3.4
	2024-07-29 11:11:27.594621 I | embed: ready to serve client requests
	2024-07-29 11:11:27.600096 I | embed: serving client requests on 192.168.76.2:2379
	2024-07-29 11:11:27.628204 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-07-29 11:11:27.628470 I | etcdserver/api: enabled capabilities for version 3.4
	2024-07-29 11:11:51.546827 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:11:52.864744 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:12:02.864827 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:12:12.864369 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:12:22.864242 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:12:32.864422 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:12:42.864330 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:12:52.864378 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:13:02.875254 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:13:12.864638 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:13:22.864444 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:13:32.864422 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:13:42.864360 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [d855c664b20f282851a23aa13af697ef4f539406374e1c860c26597b84f8ee75] <==
	2024-07-29 11:16:17.764584 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:16:27.764650 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:16:37.764744 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:16:47.765170 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:16:57.764834 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:17:07.764947 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:17:17.764753 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:17:27.764839 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:17:37.764695 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:17:47.764703 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:17:57.764657 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:18:07.764756 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:18:17.764741 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:18:27.764759 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:18:37.764689 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:18:47.764734 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:18:57.764634 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:19:07.764675 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:19:17.764601 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:19:27.764840 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:19:37.764587 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:19:47.764752 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:19:57.764766 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:20:07.764854 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-07-29 11:20:17.764736 I | etcdserver/api/etcdhttp: /health OK (status code 200)
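	
	Both etcd containers report a steady stream of "/health OK (status code 200)", so etcd is not the unhealthy component in this run. A hedged manual probe matching those lines; the certificate paths below are minikube's usual defaults and are an assumption, not taken from this log:
	
		sudo curl -s --cacert /var/lib/minikube/certs/etcd/ca.crt \
		  --cert /var/lib/minikube/certs/etcd/server.crt \
		  --key /var/lib/minikube/certs/etcd/server.key \
		  https://127.0.0.1:2379/health
		# expected: {"health":"true"}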
	
	
	==> kernel <==
	 11:20:23 up 19:02,  0 users,  load average: 1.97, 1.64, 2.18
	Linux old-k8s-version-398652 5.15.0-1065-aws #71~20.04.1-Ubuntu SMP Fri Jun 28 19:59:49 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [be4fb3954f9193d0577447927a1b728347ba8abdcfffe06990bb5d05b6c8f49c] <==
	I0729 11:19:10.424587       1 main.go:299] handling current node
	I0729 11:19:20.424745       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0729 11:19:20.424811       1 main.go:299] handling current node
	W0729 11:19:26.454581       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 11:19:26.454627       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0729 11:19:30.424767       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0729 11:19:30.425143       1 main.go:299] handling current node
	W0729 11:19:30.572031       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0729 11:19:30.572067       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0729 11:19:40.424619       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0729 11:19:40.424663       1 main.go:299] handling current node
	I0729 11:19:50.425098       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0729 11:19:50.425142       1 main.go:299] handling current node
	W0729 11:19:54.081353       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0729 11:19:54.081547       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0729 11:20:00.436633       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0729 11:20:00.436754       1 main.go:299] handling current node
	W0729 11:20:02.982678       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0729 11:20:02.982715       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0729 11:20:10.424529       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0729 11:20:10.424586       1 main.go:299] handling current node
	W0729 11:20:10.670612       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 11:20:10.670650       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0729 11:20:20.424849       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0729 11:20:20.424968       1 main.go:299] handling current node
	
	
	==> kindnet [e47e4b203143f4c04a2625539152adf493fbd66f0141c8fa35d67c0eb9dcd15e] <==
	E0729 11:12:32.806557       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0729 11:12:36.837250       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0729 11:12:36.837291       1 main.go:299] handling current node
	W0729 11:12:38.419941       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 11:12:38.420126       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0729 11:12:46.837776       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0729 11:12:46.838003       1 main.go:299] handling current node
	I0729 11:12:56.837431       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0729 11:12:56.837471       1 main.go:299] handling current node
	I0729 11:13:06.837828       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0729 11:13:06.837871       1 main.go:299] handling current node
	W0729 11:13:06.872592       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0729 11:13:06.872635       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0729 11:13:16.837544       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0729 11:13:16.837582       1 main.go:299] handling current node
	W0729 11:13:20.841285       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0729 11:13:20.841383       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	W0729 11:13:23.491732       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0729 11:13:23.492042       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.2/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0729 11:13:26.838181       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0729 11:13:26.838221       1 main.go:299] handling current node
	I0729 11:13:36.840255       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0729 11:13:36.840293       1 main.go:299] handling current node
	I0729 11:13:46.839241       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0729 11:13:46.839334       1 main.go:299] handling current node
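	
	The recurring kindnet warnings in both instances are RBAC denials: the kube-system:kindnet service account may not list pods, namespaces, or networkpolicies at cluster scope, so its reflectors retry indefinitely while per-node handling continues to work. An illustrative way to confirm the denials from the same cluster context:
	
		kubectl auth can-i list pods --as=system:serviceaccount:kube-system:kindnet
		kubectl auth can-i list namespaces --as=system:serviceaccount:kube-system:kindnet
		kubectl auth can-i list networkpolicies.networking.k8s.io --as=system:serviceaccount:kube-system:kindnet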
	
	
	==> kube-apiserver [55eabc6b310d11652dacd8619d5c8576e4a8dd6e56b763e6f5f40bd868a7aded] <==
	I0729 11:17:19.409183       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0729 11:17:19.409192       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0729 11:17:28.285132       1 handler_proxy.go:102] no RequestInfo found in the context
	E0729 11:17:28.285249       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 11:17:28.285261       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 11:17:55.429770       1 client.go:360] parsed scheme: "passthrough"
	I0729 11:17:55.429819       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0729 11:17:55.429828       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0729 11:18:27.894180       1 client.go:360] parsed scheme: "passthrough"
	I0729 11:18:27.894224       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0729 11:18:27.894233       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0729 11:18:58.336415       1 client.go:360] parsed scheme: "passthrough"
	I0729 11:18:58.336469       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0729 11:18:58.336477       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0729 11:19:26.769216       1 handler_proxy.go:102] no RequestInfo found in the context
	E0729 11:19:26.769301       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0729 11:19:26.769312       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0729 11:19:40.737783       1 client.go:360] parsed scheme: "passthrough"
	I0729 11:19:40.737844       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0729 11:19:40.737996       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0729 11:20:21.684323       1 client.go:360] parsed scheme: "passthrough"
	I0729 11:20:21.684370       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0729 11:20:21.684378       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
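	
	The apiserver's repeating 503 for v1beta1.metrics.k8s.io is the downstream effect of metrics-server never starting: the aggregated APIService stays unavailable, which also explains the garbage-collector and resource-quota discovery failures in the controller-manager logs below. A hedged way to inspect the aggregation status:
	
		kubectl get apiservice v1beta1.metrics.k8s.io
		kubectl describe apiservice v1beta1.metrics.k8s.io   # Status conditions should show Available=False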
	
	
	==> kube-apiserver [8db7d55daf4e8f1f7c356410dce4fc8bfe4e73b58c73519316918d020f07a738] <==
	I0729 11:11:34.256476       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0729 11:11:34.256502       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0729 11:11:34.274137       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0729 11:11:34.280112       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0729 11:11:34.280188       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0729 11:11:34.766717       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0729 11:11:34.807287       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0729 11:11:34.893547       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0729 11:11:34.894716       1 controller.go:606] quota admission added evaluator for: endpoints
	I0729 11:11:34.898859       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0729 11:11:35.907438       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0729 11:11:36.357233       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0729 11:11:36.416037       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0729 11:11:44.906301       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0729 11:11:52.035058       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0729 11:11:52.037322       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0729 11:12:03.537724       1 client.go:360] parsed scheme: "passthrough"
	I0729 11:12:03.537766       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0729 11:12:03.537774       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0729 11:12:44.263547       1 client.go:360] parsed scheme: "passthrough"
	I0729 11:12:44.263593       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0729 11:12:44.263603       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0729 11:13:22.618701       1 client.go:360] parsed scheme: "passthrough"
	I0729 11:13:22.618971       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0729 11:13:22.618998       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [789c7fdc7b8aac104b10d2c1cca0c6ce267d3325a6305aaea9f9af92bab8c889] <==
	I0729 11:11:52.226291       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-jzn6w"
	I0729 11:11:52.226972       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-4gg87"
	I0729 11:11:52.250739       1 shared_informer.go:247] Caches are synced for taint 
	I0729 11:11:52.251069       1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone: 
	W0729 11:11:52.251254       1 node_lifecycle_controller.go:1044] Missing timestamp for Node old-k8s-version-398652. Assuming now as a timestamp.
	I0729 11:11:52.251459       1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
	I0729 11:11:52.251636       1 event.go:291] "Event occurred" object="old-k8s-version-398652" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-398652 event: Registered Node old-k8s-version-398652 in Controller"
	I0729 11:11:52.251758       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0729 11:11:52.253753       1 shared_informer.go:247] Caches are synced for resource quota 
	I0729 11:11:52.271639       1 shared_informer.go:247] Caches are synced for resource quota 
	I0729 11:11:52.275856       1 shared_informer.go:247] Caches are synced for attach detach 
	I0729 11:11:52.275884       1 shared_informer.go:247] Caches are synced for job 
	I0729 11:11:52.303628       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-tx9sc"
	I0729 11:11:52.304825       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-q2t7q"
	I0729 11:11:52.355393       1 event.go:291] "Event occurred" object="kube-system/kube-apiserver-old-k8s-version-398652" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0729 11:11:52.458189       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0729 11:11:52.666461       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0729 11:11:52.694296       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0729 11:11:52.694320       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0729 11:11:53.891710       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0729 11:11:53.909607       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-4gg87"
	I0729 11:11:57.251847       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0729 11:13:46.270361       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0729 11:13:46.406216       1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
	E0729 11:13:46.406871       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-controller-manager [8ccafb224e43a5b6518db9936d1dc9fd44a73e2192879bb5bf0f3ce3b4d175cc] <==
	E0729 11:16:16.300307       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0729 11:16:22.028120       1 request.go:655] Throttling request took 1.048403986s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0729 11:16:22.879708       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0729 11:16:46.861002       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0729 11:16:54.530297       1 request.go:655] Throttling request took 1.048395424s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0729 11:16:55.381643       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0729 11:17:17.363176       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0729 11:17:27.032013       1 request.go:655] Throttling request took 1.048313802s, request: GET:https://192.168.76.2:8443/apis/autoscaling/v2beta1?timeout=32s
	W0729 11:17:27.885595       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0729 11:17:47.865187       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0729 11:17:59.535974       1 request.go:655] Throttling request took 1.048044816s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0729 11:18:00.388098       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0729 11:18:18.366880       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0729 11:18:32.038549       1 request.go:655] Throttling request took 1.048399793s, request: GET:https://192.168.76.2:8443/apis/coordination.k8s.io/v1?timeout=32s
	W0729 11:18:32.890154       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0729 11:18:48.868805       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0729 11:19:04.540727       1 request.go:655] Throttling request took 1.047868594s, request: GET:https://192.168.76.2:8443/apis/authentication.k8s.io/v1?timeout=32s
	W0729 11:19:05.392251       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0729 11:19:19.370770       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0729 11:19:37.042742       1 request.go:655] Throttling request took 1.048158798s, request: GET:https://192.168.76.2:8443/apis/policy/v1beta1?timeout=32s
	W0729 11:19:37.894634       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0729 11:19:49.872786       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0729 11:20:09.545326       1 request.go:655] Throttling request took 1.0480663s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0729 11:20:10.397253       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0729 11:20:20.374521       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
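
Every line in this block traces to one root cause: the v1beta1.metrics.k8s.io APIService is registered, but its backing metrics-server pod never becomes ready (see the kubelet section below), so API discovery keeps failing and the resource-quota and garbage collectors retry indefinitely; the "Throttling request" lines are client-side rate limiting from those retries. A hypothetical check that would make this explicit:

	kubectl --context old-k8s-version-398652 get apiservice v1beta1.metrics.k8s.io
	kubectl --context old-k8s-version-398652 describe apiservice v1beta1.metrics.k8s.io

While the pod is down, the Available condition should read False (typically with reason MissingEndpoints or FailedDiscoveryCheck).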
	
	
	==> kube-proxy [54ffb19a0292eb77b61a76c3728fb619af5c455bf9ff1241a21b0069be4e8747] <==
	I0729 11:14:27.856057       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0729 11:14:27.856164       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0729 11:14:27.896339       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0729 11:14:27.896426       1 server_others.go:185] Using iptables Proxier.
	I0729 11:14:27.896636       1 server.go:650] Version: v1.20.0
	I0729 11:14:27.898816       1 config.go:315] Starting service config controller
	I0729 11:14:27.898828       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0729 11:14:27.898869       1 config.go:224] Starting endpoint slice config controller
	I0729 11:14:27.898873       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0729 11:14:27.999034       1 shared_informer.go:247] Caches are synced for service config 
	I0729 11:14:27.999002       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [b2c3fad36616c573babfc67ee709885d5905cf5a54593886a6f579147c8ce133] <==
	I0729 11:11:53.172261       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0729 11:11:53.172347       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0729 11:11:53.201532       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0729 11:11:53.201670       1 server_others.go:185] Using iptables Proxier.
	I0729 11:11:53.201890       1 server.go:650] Version: v1.20.0
	I0729 11:11:53.205226       1 config.go:224] Starting endpoint slice config controller
	I0729 11:11:53.205241       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0729 11:11:53.205346       1 config.go:315] Starting service config controller
	I0729 11:11:53.205350       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0729 11:11:53.308037       1 shared_informer.go:247] Caches are synced for service config 
	I0729 11:11:53.308099       1 shared_informer.go:247] Caches are synced for endpoint slice config 
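
Both kube-proxy instances log Unknown proxy mode "" because the mode field in the kubeadm-generated kube-proxy ConfigMap is empty, so kube-proxy falls back to iptables; the fallback is the intended behavior here. Assuming the standard kubeadm layout minikube uses, the effective config can be read with:

	kubectl --context old-k8s-version-398652 -n kube-system get configmap kube-proxy -o jsonpath='{.data.config\.conf}' | grep mode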
	
	
	==> kube-scheduler [7743ce5235b563b5fef6aed42a02b9652010558f0c0bca72fdd35f7237352e4e] <==
	W0729 11:11:33.402354       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 11:11:33.503354       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0729 11:11:33.503646       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 11:11:33.503734       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 11:11:33.504751       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0729 11:11:33.518740       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0729 11:11:33.523328       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 11:11:33.523720       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0729 11:11:33.537605       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 11:11:33.537721       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0729 11:11:33.537805       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 11:11:33.537882       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 11:11:33.537965       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 11:11:33.538039       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0729 11:11:33.538110       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 11:11:33.538202       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0729 11:11:33.538289       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0729 11:11:34.384171       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0729 11:11:34.421961       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0729 11:11:34.433084       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0729 11:11:34.454839       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0729 11:11:34.521862       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0729 11:11:34.535564       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0729 11:11:34.595440       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0729 11:11:34.912613       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
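
The burst of "forbidden" errors above is a startup race, not a misconfiguration: the scheduler begins listing resources before the API server's bootstrap RBAC bindings exist, and the errors stop once caches sync (last line). If they persisted, permissions could be probed directly, e.g.:

	kubectl --context old-k8s-version-398652 auth can-i list pods --as=system:kube-scheduler
	kubectl --context old-k8s-version-398652 auth can-i list csinodes.storage.k8s.io --as=system:kube-scheduler

Both should answer yes on a healthy cluster once the system:kube-scheduler bindings are in place.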
	
	
	==> kube-scheduler [92e67f37a7b9d727171d0240a5fde8b95850b192051b0f809bbe087f8c7de33a] <==
	I0729 11:14:21.938488       1 serving.go:331] Generated self-signed cert in-memory
	W0729 11:14:25.663988       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0729 11:14:25.664046       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0729 11:14:25.664060       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0729 11:14:25.664065       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0729 11:14:25.886907       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 11:14:25.886946       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0729 11:14:25.889165       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0729 11:14:25.904582       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0729 11:14:25.987916       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
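
The requestheader_controller warning above includes its own remediation hint. Since this scheduler authenticates with a client certificate as system:kube-scheduler rather than a ServiceAccount, the equivalent binding (with a hypothetical name, scheduler-authn-reader) would use --user instead of --serviceaccount:

	kubectl --context old-k8s-version-398652 -n kube-system create rolebinding scheduler-authn-reader --role=extension-apiserver-authentication-reader --user=system:kube-scheduler

The warning is harmless in this run; the scheduler simply continues without the authentication configuration.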
	
	
	==> kubelet <==
	Jul 29 11:18:51 old-k8s-version-398652 kubelet[661]: E0729 11:18:51.660081     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 29 11:18:55 old-k8s-version-398652 kubelet[661]: I0729 11:18:55.659261     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: c30481ad3a69aa82b534d6e2b4ddc96ed457a530c96b9a26b461e4bc8578b4a4
	Jul 29 11:18:55 old-k8s-version-398652 kubelet[661]: E0729 11:18:55.660086     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	Jul 29 11:19:02 old-k8s-version-398652 kubelet[661]: E0729 11:19:02.660084     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 29 11:19:06 old-k8s-version-398652 kubelet[661]: I0729 11:19:06.659667     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: c30481ad3a69aa82b534d6e2b4ddc96ed457a530c96b9a26b461e4bc8578b4a4
	Jul 29 11:19:06 old-k8s-version-398652 kubelet[661]: E0729 11:19:06.660508     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	Jul 29 11:19:17 old-k8s-version-398652 kubelet[661]: E0729 11:19:17.660133     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 29 11:19:19 old-k8s-version-398652 kubelet[661]: I0729 11:19:19.659294     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: c30481ad3a69aa82b534d6e2b4ddc96ed457a530c96b9a26b461e4bc8578b4a4
	Jul 29 11:19:19 old-k8s-version-398652 kubelet[661]: E0729 11:19:19.659640     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	Jul 29 11:19:32 old-k8s-version-398652 kubelet[661]: E0729 11:19:32.660866     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 29 11:19:34 old-k8s-version-398652 kubelet[661]: I0729 11:19:34.659285     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: c30481ad3a69aa82b534d6e2b4ddc96ed457a530c96b9a26b461e4bc8578b4a4
	Jul 29 11:19:34 old-k8s-version-398652 kubelet[661]: E0729 11:19:34.659747     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	Jul 29 11:19:45 old-k8s-version-398652 kubelet[661]: E0729 11:19:45.659987     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 29 11:19:47 old-k8s-version-398652 kubelet[661]: I0729 11:19:47.659399     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: c30481ad3a69aa82b534d6e2b4ddc96ed457a530c96b9a26b461e4bc8578b4a4
	Jul 29 11:19:47 old-k8s-version-398652 kubelet[661]: E0729 11:19:47.660198     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	Jul 29 11:19:58 old-k8s-version-398652 kubelet[661]: E0729 11:19:58.666044     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jul 29 11:20:02 old-k8s-version-398652 kubelet[661]: I0729 11:20:02.659361     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: c30481ad3a69aa82b534d6e2b4ddc96ed457a530c96b9a26b461e4bc8578b4a4
	Jul 29 11:20:02 old-k8s-version-398652 kubelet[661]: E0729 11:20:02.660579     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	Jul 29 11:20:09 old-k8s-version-398652 kubelet[661]: E0729 11:20:09.691364     661 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Jul 29 11:20:09 old-k8s-version-398652 kubelet[661]: E0729 11:20:09.691450     661 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Jul 29 11:20:09 old-k8s-version-398652 kubelet[661]: E0729 11:20:09.691658     661 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-jpdkd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Jul 29 11:20:09 old-k8s-version-398652 kubelet[661]: E0729 11:20:09.691692     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Jul 29 11:20:13 old-k8s-version-398652 kubelet[661]: I0729 11:20:13.659236     661 scope.go:95] [topologymanager] RemoveContainer - Container ID: c30481ad3a69aa82b534d6e2b4ddc96ed457a530c96b9a26b461e4bc8578b4a4
	Jul 29 11:20:13 old-k8s-version-398652 kubelet[661]: E0729 11:20:13.659606     661 pod_workers.go:191] Error syncing pod 139278e5-1e2b-4ecc-92ed-a8f9113a7048 ("dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-dwnhw_kubernetes-dashboard(139278e5-1e2b-4ecc-92ed-a8f9113a7048)"
	Jul 29 11:20:20 old-k8s-version-398652 kubelet[661]: E0729 11:20:20.668108     661 pod_workers.go:191] Error syncing pod e474d191-1f6c-4baf-8622-05a678b0c38c ("metrics-server-9975d5f86-c578w_kube-system(e474d191-1f6c-4baf-8622-05a678b0c38c)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
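
The pull failures in this block are expected: fake.domain is an unresolvable registry (presumably substituted deliberately by the test suite to keep metrics-server perpetually non-running), and the "dial tcp: lookup fake.domain ... no such host" lines show a plain DNS failure, not a registry auth problem. The offending image reference can be read straight off the Deployment:

	kubectl --context old-k8s-version-398652 -n kube-system get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'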
	
	
	==> kubernetes-dashboard [0afb69ae0e699da6d8df0dbfb7b284327d738087f9b4ba1a283917462e4ff191] <==
	2024/07/29 11:14:53 Using namespace: kubernetes-dashboard
	2024/07/29 11:14:53 Using in-cluster config to connect to apiserver
	2024/07/29 11:14:53 Using secret token for csrf signing
	2024/07/29 11:14:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/07/29 11:14:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/07/29 11:14:53 Successful initial request to the apiserver, version: v1.20.0
	2024/07/29 11:14:53 Generating JWE encryption key
	2024/07/29 11:14:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/07/29 11:14:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/07/29 11:14:54 Initializing JWE encryption key from synchronized object
	2024/07/29 11:14:54 Creating in-cluster Sidecar client
	2024/07/29 11:14:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/29 11:14:54 Serving insecurely on HTTP port: 9090
	2024/07/29 11:15:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/29 11:15:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/29 11:16:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/29 11:16:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/29 11:17:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/29 11:17:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/29 11:18:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/29 11:18:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/29 11:19:24 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/29 11:19:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/07/29 11:14:53 Starting overwatch
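
The trailing 11:14:53 "Starting overwatch" line is most likely ordinary goroutine log interleaving rather than a restart. The repeating health-check failure is consistent with the dashboard-metrics-scraper pod crash-looping in the kubelet section above: the dashboard's Sidecar client polls that service every 30 seconds and keeps getting a 503. A hypothetical check for ready backends:

	kubectl --context old-k8s-version-398652 -n kubernetes-dashboard get endpoints dashboard-metrics-scraper

An empty ENDPOINTS column would confirm no ready scraper pod sits behind the service.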
	
	
	==> storage-provisioner [63ccc5a016621ddee17a12e23e7873395935fcf7d04f3ffabff8ba671927254a] <==
	I0729 11:14:28.117821       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 11:14:28.130454       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 11:14:28.130625       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 11:14:45.572666       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 11:14:45.573144       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-398652_67b071a1-059c-4bfb-9a6b-2571d0f6365a!
	I0729 11:14:45.573627       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f7388e5c-78f5-46c8-901b-71e37a5c1688", APIVersion:"v1", ResourceVersion:"781", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-398652_67b071a1-059c-4bfb-9a6b-2571d0f6365a became leader
	I0729 11:14:45.674135       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-398652_67b071a1-059c-4bfb-9a6b-2571d0f6365a!
	
	
	==> storage-provisioner [c353bab52107db86c72f21b2699f5c44a9e22f17ce40f5d83659ce4f08e9b3d4] <==
	I0729 11:11:54.334671       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0729 11:11:54.365635       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0729 11:11:54.366104       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0729 11:11:54.380729       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0729 11:11:54.380892       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-398652_91ee4011-adbb-4ef1-9250-368ca55065ee!
	I0729 11:11:54.381272       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f7388e5c-78f5-46c8-901b-71e37a5c1688", APIVersion:"v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-398652_91ee4011-adbb-4ef1-9250-368ca55065ee became leader
	I0729 11:11:54.481205       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-398652_91ee4011-adbb-4ef1-9250-368ca55065ee!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-398652 -n old-k8s-version-398652
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-398652 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-c578w
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-398652 describe pod metrics-server-9975d5f86-c578w
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-398652 describe pod metrics-server-9975d5f86-c578w: exit status 1 (117.939218ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-c578w" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-398652 describe pod metrics-server-9975d5f86-c578w: exit status 1
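helpers_test.go: note that the NotFound above is almost certainly a namespace miss rather than a vanished pod: the describe command omits -n kube-system, so kubectl looks in the default namespace, while the non-running metrics-server pod lives in kube-system. When reproducing this post-mortem by hand, the namespaced, label-based form (assuming the addon's usual k8s-app=metrics-server label) avoids both the namespace miss and stale pod names:

	kubectl --context old-k8s-version-398652 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context old-k8s-version-398652 -n kube-system describe pod metrics-server-9975d5f86-c578w
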
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (385.55s)

                                                
                                    

Test pass (303/336)

Order  Test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 9.22
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.30.3/json-events 7.37
13 TestDownloadOnly/v1.30.3/preload-exists 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.07
18 TestDownloadOnly/v1.30.3/DeleteAll 0.22
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.31.0-beta.0/json-events 8.02
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.26
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.39
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.22
30 TestBinaryMirror 0.54
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 160.38
40 TestAddons/serial/GCPAuth/Namespaces 0.16
42 TestAddons/parallel/Registry 16.25
43 TestAddons/parallel/Ingress 21.19
44 TestAddons/parallel/InspektorGadget 11.92
45 TestAddons/parallel/MetricsServer 7.03
48 TestAddons/parallel/CSI 63.75
49 TestAddons/parallel/Headlamp 16.92
50 TestAddons/parallel/CloudSpanner 6.95
51 TestAddons/parallel/LocalPath 52.17
52 TestAddons/parallel/NvidiaDevicePlugin 6.65
53 TestAddons/parallel/Yakd 11.89
54 TestAddons/StoppedEnableDisable 12.25
55 TestCertOptions 36.29
56 TestCertExpiration 229.45
58 TestForceSystemdFlag 44.53
59 TestForceSystemdEnv 44.47
60 TestDockerEnvContainerd 46.42
65 TestErrorSpam/setup 30.94
66 TestErrorSpam/start 0.72
67 TestErrorSpam/status 0.99
68 TestErrorSpam/pause 1.69
69 TestErrorSpam/unpause 1.72
70 TestErrorSpam/stop 1.41
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 68.96
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 6.19
77 TestFunctional/serial/KubeContext 0.06
78 TestFunctional/serial/KubectlGetPods 0.09
81 TestFunctional/serial/CacheCmd/cache/add_remote 4.91
82 TestFunctional/serial/CacheCmd/cache/add_local 1.49
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
84 TestFunctional/serial/CacheCmd/cache/list 0.06
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
86 TestFunctional/serial/CacheCmd/cache/cache_reload 2.18
87 TestFunctional/serial/CacheCmd/cache/delete 0.11
88 TestFunctional/serial/MinikubeKubectlCmd 0.15
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
90 TestFunctional/serial/ExtraConfig 42.62
91 TestFunctional/serial/ComponentHealth 0.11
92 TestFunctional/serial/LogsCmd 1.77
93 TestFunctional/serial/LogsFileCmd 1.73
94 TestFunctional/serial/InvalidService 4.97
96 TestFunctional/parallel/ConfigCmd 0.44
97 TestFunctional/parallel/DashboardCmd 8.82
98 TestFunctional/parallel/DryRun 0.52
99 TestFunctional/parallel/InternationalLanguage 0.24
100 TestFunctional/parallel/StatusCmd 1.07
104 TestFunctional/parallel/ServiceCmdConnect 13.62
105 TestFunctional/parallel/AddonsCmd 0.19
106 TestFunctional/parallel/PersistentVolumeClaim 26.21
108 TestFunctional/parallel/SSHCmd 0.67
109 TestFunctional/parallel/CpCmd 2.31
111 TestFunctional/parallel/FileSync 0.41
112 TestFunctional/parallel/CertSync 2.12
116 TestFunctional/parallel/NodeLabels 0.11
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.86
120 TestFunctional/parallel/License 0.42
121 TestFunctional/parallel/Version/short 0.07
122 TestFunctional/parallel/Version/components 1.23
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.29
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.34
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
127 TestFunctional/parallel/ImageCommands/ImageBuild 3.16
128 TestFunctional/parallel/ImageCommands/Setup 0.74
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
132 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.66
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.32
134 TestFunctional/parallel/ProfileCmd/profile_not_create 0.55
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.61
136 TestFunctional/parallel/ProfileCmd/profile_list 0.45
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.49
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.77
140 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.69
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.8
142 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.51
145 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.71
146 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.85
147 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
148 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
152 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
153 TestFunctional/parallel/ServiceCmd/DeployApp 6.27
154 TestFunctional/parallel/ServiceCmd/List 0.67
155 TestFunctional/parallel/MountCmd/any-port 7.3
156 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
157 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
158 TestFunctional/parallel/ServiceCmd/Format 0.57
159 TestFunctional/parallel/ServiceCmd/URL 0.4
160 TestFunctional/parallel/MountCmd/specific-port 2.41
161 TestFunctional/parallel/MountCmd/VerifyCleanup 2.04
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.01
168 TestMultiControlPlane/serial/StartCluster 123.57
169 TestMultiControlPlane/serial/DeployApp 35.65
170 TestMultiControlPlane/serial/PingHostFromPods 1.61
171 TestMultiControlPlane/serial/AddWorkerNode 24.3
172 TestMultiControlPlane/serial/NodeLabels 0.12
173 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.78
174 TestMultiControlPlane/serial/CopyFile 19.33
175 TestMultiControlPlane/serial/StopSecondaryNode 12.95
176 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
177 TestMultiControlPlane/serial/RestartSecondaryNode 18.69
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.74
179 TestMultiControlPlane/serial/RestartClusterKeepsNodes 151.38
180 TestMultiControlPlane/serial/DeleteSecondaryNode 11.29
181 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.54
182 TestMultiControlPlane/serial/StopCluster 36
183 TestMultiControlPlane/serial/RestartCluster 78.6
184 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.53
185 TestMultiControlPlane/serial/AddSecondaryNode 46.81
186 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.8
190 TestJSONOutput/start/Command 62.6
191 TestJSONOutput/start/Audit 0
193 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/pause/Command 0.73
197 TestJSONOutput/pause/Audit 0
199 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/unpause/Command 0.65
203 TestJSONOutput/unpause/Audit 0
205 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
208 TestJSONOutput/stop/Command 5.77
209 TestJSONOutput/stop/Audit 0
211 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
212 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
213 TestErrorJSONOutput 0.21
215 TestKicCustomNetwork/create_custom_network 38.31
216 TestKicCustomNetwork/use_default_bridge_network 33.09
217 TestKicExistingNetwork 37.14
218 TestKicCustomSubnet 34.98
219 TestKicStaticIP 36.42
220 TestMainNoArgs 0.05
221 TestMinikubeProfile 69.71
224 TestMountStart/serial/StartWithMountFirst 8.53
225 TestMountStart/serial/VerifyMountFirst 0.26
226 TestMountStart/serial/StartWithMountSecond 6.57
227 TestMountStart/serial/VerifyMountSecond 0.26
228 TestMountStart/serial/DeleteFirst 1.58
229 TestMountStart/serial/VerifyMountPostDelete 0.26
230 TestMountStart/serial/Stop 1.22
231 TestMountStart/serial/RestartStopped 7.29
232 TestMountStart/serial/VerifyMountPostStop 0.26
235 TestMultiNode/serial/FreshStart2Nodes 75.2
236 TestMultiNode/serial/DeployApp2Nodes 18.17
237 TestMultiNode/serial/PingHostFrom2Pods 1.03
238 TestMultiNode/serial/AddNode 15.48
239 TestMultiNode/serial/MultiNodeLabels 0.09
240 TestMultiNode/serial/ProfileList 0.35
241 TestMultiNode/serial/CopyFile 9.9
242 TestMultiNode/serial/StopNode 2.23
243 TestMultiNode/serial/StartAfterStop 9.33
244 TestMultiNode/serial/RestartKeepsNodes 86.73
245 TestMultiNode/serial/DeleteNode 5.87
246 TestMultiNode/serial/StopMultiNode 23.98
247 TestMultiNode/serial/RestartMultiNode 51.04
248 TestMultiNode/serial/ValidateNameConflict 32.8
253 TestPreload 108.05
255 TestScheduledStopUnix 110.71
258 TestInsufficientStorage 11.02
259 TestRunningBinaryUpgrade 94.79
261 TestKubernetesUpgrade 362.34
262 TestMissingContainerUpgrade 154.92
264 TestPause/serial/Start 72.68
266 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
267 TestNoKubernetes/serial/StartWithK8s 41.22
268 TestNoKubernetes/serial/StartWithStopK8s 7.15
269 TestNoKubernetes/serial/Start 8.87
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
271 TestNoKubernetes/serial/ProfileList 0.99
272 TestNoKubernetes/serial/Stop 1.33
273 TestNoKubernetes/serial/StartNoArgs 6.55
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
275 TestPause/serial/SecondStartNoReconfiguration 7.55
276 TestPause/serial/Pause 1.32
277 TestPause/serial/VerifyStatus 0.42
278 TestPause/serial/Unpause 0.91
279 TestPause/serial/PauseAgain 1.47
280 TestPause/serial/DeletePaused 3.22
281 TestPause/serial/VerifyDeletedResources 0.16
282 TestStoppedBinaryUpgrade/Setup 1.06
283 TestStoppedBinaryUpgrade/Upgrade 106.12
284 TestStoppedBinaryUpgrade/MinikubeLogs 1.02
299 TestNetworkPlugins/group/false 4.9
304 TestStartStop/group/old-k8s-version/serial/FirstStart 164.19
306 TestStartStop/group/no-preload/serial/FirstStart 76.81
307 TestStartStop/group/old-k8s-version/serial/DeployApp 8.9
308 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.24
309 TestStartStop/group/old-k8s-version/serial/Stop 12.65
310 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
312 TestStartStop/group/no-preload/serial/DeployApp 8.42
313 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.22
314 TestStartStop/group/no-preload/serial/Stop 12.26
315 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
316 TestStartStop/group/no-preload/serial/SecondStart 266.71
317 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
318 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
319 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
320 TestStartStop/group/no-preload/serial/Pause 3.32
322 TestStartStop/group/embed-certs/serial/FirstStart 63.1
323 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
324 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
325 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.28
326 TestStartStop/group/old-k8s-version/serial/Pause 4.25
328 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 72.01
329 TestStartStop/group/embed-certs/serial/DeployApp 9.46
330 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.3
331 TestStartStop/group/embed-certs/serial/Stop 12.35
332 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
333 TestStartStop/group/embed-certs/serial/SecondStart 291.8
334 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.48
335 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.17
336 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.1
337 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
338 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 267.9
339 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
340 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
341 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
342 TestStartStop/group/embed-certs/serial/Pause 3.12
344 TestStartStop/group/newest-cni/serial/FirstStart 39.39
345 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.15
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.92
349 TestNetworkPlugins/group/auto/Start 75.82
350 TestStartStop/group/newest-cni/serial/DeployApp 0
351 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.91
352 TestStartStop/group/newest-cni/serial/Stop 3.64
353 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.27
354 TestStartStop/group/newest-cni/serial/SecondStart 22.52
355 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
356 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
357 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
358 TestStartStop/group/newest-cni/serial/Pause 4.38
359 TestNetworkPlugins/group/kindnet/Start 69.67
360 TestNetworkPlugins/group/auto/KubeletFlags 0.38
361 TestNetworkPlugins/group/auto/NetCatPod 10.34
362 TestNetworkPlugins/group/auto/DNS 0.18
363 TestNetworkPlugins/group/auto/Localhost 0.19
364 TestNetworkPlugins/group/auto/HairPin 0.16
365 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
366 TestNetworkPlugins/group/calico/Start 76.12
367 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
368 TestNetworkPlugins/group/kindnet/NetCatPod 12.48
369 TestNetworkPlugins/group/kindnet/DNS 0.22
370 TestNetworkPlugins/group/kindnet/Localhost 0.22
371 TestNetworkPlugins/group/kindnet/HairPin 0.22
372 TestNetworkPlugins/group/custom-flannel/Start 67.82
373 TestNetworkPlugins/group/calico/ControllerPod 6.01
374 TestNetworkPlugins/group/calico/KubeletFlags 0.39
375 TestNetworkPlugins/group/calico/NetCatPod 10.3
376 TestNetworkPlugins/group/calico/DNS 0.35
377 TestNetworkPlugins/group/calico/Localhost 0.24
378 TestNetworkPlugins/group/calico/HairPin 0.21
379 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.41
380 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.37
381 TestNetworkPlugins/group/enable-default-cni/Start 59.24
382 TestNetworkPlugins/group/custom-flannel/DNS 0.27
383 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
384 TestNetworkPlugins/group/custom-flannel/HairPin 0.19
385 TestNetworkPlugins/group/flannel/Start 66.37
386 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.43
387 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.44
388 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
389 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
390 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
391 TestNetworkPlugins/group/bridge/Start 50.03
392 TestNetworkPlugins/group/flannel/ControllerPod 6.01
393 TestNetworkPlugins/group/flannel/KubeletFlags 0.34
394 TestNetworkPlugins/group/flannel/NetCatPod 10.33
395 TestNetworkPlugins/group/flannel/DNS 0.25
396 TestNetworkPlugins/group/flannel/Localhost 0.22
397 TestNetworkPlugins/group/flannel/HairPin 0.17
398 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
399 TestNetworkPlugins/group/bridge/NetCatPod 9.26
400 TestNetworkPlugins/group/bridge/DNS 33.46
401 TestNetworkPlugins/group/bridge/Localhost 0.18
402 TestNetworkPlugins/group/bridge/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (9.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-425957 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-425957 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.218552584s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.22s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
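
preload-exists passes instantly because the preceding json-events run already downloaded the preload tarball into the shared cache; the test only checks that the file is on disk. Using the cache path that appears later in the start log, a manual equivalent would be roughly:

	ls /home/jenkins/minikube-integration/19337-2904404/.minikube/cache/preloaded-tarball/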

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-425957
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-425957: exit status 85 (72.737544ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-425957 | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC |          |
	|         | -p download-only-425957        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 10:23:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 10:23:22.318672 2909794 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:23:22.318839 2909794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:23:22.318851 2909794 out.go:304] Setting ErrFile to fd 2...
	I0729 10:23:22.318858 2909794 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:23:22.319089 2909794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-2904404/.minikube/bin
	W0729 10:23:22.319225 2909794 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19337-2904404/.minikube/config/config.json: open /home/jenkins/minikube-integration/19337-2904404/.minikube/config/config.json: no such file or directory
	I0729 10:23:22.319647 2909794 out.go:298] Setting JSON to true
	I0729 10:23:22.320756 2909794 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":65153,"bootTime":1722183450,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0729 10:23:22.320834 2909794 start.go:139] virtualization:  
	I0729 10:23:22.323758 2909794 out.go:97] [download-only-425957] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	W0729 10:23:22.323975 2909794 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19337-2904404/.minikube/cache/preloaded-tarball: no such file or directory
	I0729 10:23:22.324050 2909794 notify.go:220] Checking for updates...
	I0729 10:23:22.326240 2909794 out.go:169] MINIKUBE_LOCATION=19337
	I0729 10:23:22.328645 2909794 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:23:22.330478 2909794 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19337-2904404/kubeconfig
	I0729 10:23:22.332271 2909794 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-2904404/.minikube
	I0729 10:23:22.334197 2909794 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0729 10:23:22.337655 2909794 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 10:23:22.337975 2909794 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:23:22.362038 2909794 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0729 10:23:22.362146 2909794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 10:23:22.430128 2909794 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-07-29 10:23:22.420888983 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0729 10:23:22.430240 2909794 docker.go:307] overlay module found
	I0729 10:23:22.431900 2909794 out.go:97] Using the docker driver based on user configuration
	I0729 10:23:22.431927 2909794 start.go:297] selected driver: docker
	I0729 10:23:22.431935 2909794 start.go:901] validating driver "docker" against <nil>
	I0729 10:23:22.432046 2909794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 10:23:22.485419 2909794 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:51 SystemTime:2024-07-29 10:23:22.476724425 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0729 10:23:22.485597 2909794 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:23:22.485896 2909794 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0729 10:23:22.486050 2909794 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 10:23:22.488118 2909794 out.go:169] Using Docker driver with root privileges
	I0729 10:23:22.489814 2909794 cni.go:84] Creating CNI manager for ""
	I0729 10:23:22.489841 2909794 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0729 10:23:22.489854 2909794 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 10:23:22.489955 2909794 start.go:340] cluster config:
	{Name:download-only-425957 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-425957 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:23:22.491942 2909794 out.go:97] Starting "download-only-425957" primary control-plane node in "download-only-425957" cluster
	I0729 10:23:22.491973 2909794 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0729 10:23:22.493499 2909794 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0729 10:23:22.493523 2909794 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0729 10:23:22.493683 2909794 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0729 10:23:22.509236 2909794 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 10:23:22.509748 2909794 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0729 10:23:22.509850 2909794 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 10:23:22.570155 2909794 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0729 10:23:22.570197 2909794 cache.go:56] Caching tarball of preloaded images
	I0729 10:23:22.570770 2909794 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0729 10:23:22.573665 2909794 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0729 10:23:22.573684 2909794 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0729 10:23:22.673456 2909794 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19337-2904404/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-425957 host does not exist
	  To start a cluster, run: "minikube start -p download-only-425957"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
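
Note on the download step above: the "?checksum=md5:..." query suffix is how the preload tarball gets verified after it lands on disk. A minimal, self-contained sketch of that verify-while-downloading pattern in Go follows; the function name and error message are illustrative, not minikube's actual download.go.

    package download

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // DownloadWithMD5 streams url into dest, hashing the bytes as they are
    // written, then compares the digest against wantMD5 (hex-encoded).
    func DownloadWithMD5(url, dest, wantMD5 string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        out, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer out.Close()

        h := md5.New()
        // MultiWriter lets a single pass feed both the file and the hash.
        if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
        }
        return nil
    }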

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-425957
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.30.3/json-events (7.37s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-735175 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-735175 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (7.368019509s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (7.37s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-735175
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-735175: exit status 85 (71.813528ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-425957 | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC |                     |
	|         | -p download-only-425957        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC | 29 Jul 24 10:23 UTC |
	| delete  | -p download-only-425957        | download-only-425957 | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC | 29 Jul 24 10:23 UTC |
	| start   | -o=json --download-only        | download-only-735175 | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC |                     |
	|         | -p download-only-735175        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 10:23:31
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 10:23:31.945029 2909999 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:23:31.945219 2909999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:23:31.945249 2909999 out.go:304] Setting ErrFile to fd 2...
	I0729 10:23:31.945270 2909999 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:23:31.945536 2909999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-2904404/.minikube/bin
	I0729 10:23:31.945970 2909999 out.go:298] Setting JSON to true
	I0729 10:23:31.947083 2909999 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":65162,"bootTime":1722183450,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0729 10:23:31.947228 2909999 start.go:139] virtualization:  
	I0729 10:23:31.950184 2909999 out.go:97] [download-only-735175] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0729 10:23:31.950465 2909999 notify.go:220] Checking for updates...
	I0729 10:23:31.952412 2909999 out.go:169] MINIKUBE_LOCATION=19337
	I0729 10:23:31.954608 2909999 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:23:31.956984 2909999 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19337-2904404/kubeconfig
	I0729 10:23:31.959192 2909999 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-2904404/.minikube
	I0729 10:23:31.961430 2909999 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0729 10:23:31.965371 2909999 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 10:23:31.965644 2909999 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:23:31.986664 2909999 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0729 10:23:31.986756 2909999 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 10:23:32.052055 2909999 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-29 10:23:32.041840229 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0729 10:23:32.052164 2909999 docker.go:307] overlay module found
	I0729 10:23:32.054203 2909999 out.go:97] Using the docker driver based on user configuration
	I0729 10:23:32.054231 2909999 start.go:297] selected driver: docker
	I0729 10:23:32.054238 2909999 start.go:901] validating driver "docker" against <nil>
	I0729 10:23:32.054345 2909999 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 10:23:32.109041 2909999 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-07-29 10:23:32.099639717 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0729 10:23:32.109225 2909999 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:23:32.109525 2909999 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0729 10:23:32.109680 2909999 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 10:23:32.111655 2909999 out.go:169] Using Docker driver with root privileges
	I0729 10:23:32.113702 2909999 cni.go:84] Creating CNI manager for ""
	I0729 10:23:32.113726 2909999 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0729 10:23:32.113739 2909999 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 10:23:32.113830 2909999 start.go:340] cluster config:
	{Name:download-only-735175 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-735175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:23:32.115697 2909999 out.go:97] Starting "download-only-735175" primary control-plane node in "download-only-735175" cluster
	I0729 10:23:32.115717 2909999 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0729 10:23:32.117861 2909999 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0729 10:23:32.117900 2909999 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0729 10:23:32.118075 2909999 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0729 10:23:32.132963 2909999 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 10:23:32.133086 2909999 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0729 10:23:32.133104 2909999 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0729 10:23:32.133109 2909999 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0729 10:23:32.133116 2909999 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0729 10:23:32.174686 2909999 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4
	I0729 10:23:32.174715 2909999 cache.go:56] Caching tarball of preloaded images
	I0729 10:23:32.174882 2909999 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime containerd
	I0729 10:23:32.177212 2909999 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0729 10:23:32.177234 2909999 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4 ...
	I0729 10:23:32.273770 2909999 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4?checksum=md5:2969442dcdf6412905c6484ccc8dd1ed -> /home/jenkins/minikube-integration/19337-2904404/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-735175 host does not exist
	  To start a cluster, run: "minikube start -p download-only-735175"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.07s)
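
This second profile benefits from the kic base image cached by the first run ("Found ... in local cache directory, skipping pull"). The gate is essentially an existence check before the expensive pull; a hedged Go sketch of that branch, with an invented helper name:

    package cache

    import (
        "errors"
        "io/fs"
        "os"
    )

    // EnsureCached reports whether path already exists, so the caller can
    // skip the download/pull, mirroring the cache-hit branch in the log.
    func EnsureCached(path string) (bool, error) {
        _, err := os.Stat(path)
        if err == nil {
            return true, nil // cache hit: skip the pull
        }
        if errors.Is(err, fs.ErrNotExist) {
            return false, nil // cache miss: caller downloads
        }
        return false, err // unexpected stat error
    }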

TestDownloadOnly/v1.30.3/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.22s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-735175
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.31.0-beta.0/json-events (8.02s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-012491 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-012491 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.023419979s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (8.02s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.26s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-012491
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-012491: exit status 85 (255.158175ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-425957 | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC |                     |
	|         | -p download-only-425957             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC | 29 Jul 24 10:23 UTC |
	| delete  | -p download-only-425957             | download-only-425957 | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC | 29 Jul 24 10:23 UTC |
	| start   | -o=json --download-only             | download-only-735175 | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC |                     |
	|         | -p download-only-735175             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC | 29 Jul 24 10:23 UTC |
	| delete  | -p download-only-735175             | download-only-735175 | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC | 29 Jul 24 10:23 UTC |
	| start   | -o=json --download-only             | download-only-012491 | jenkins | v1.33.1 | 29 Jul 24 10:23 UTC |                     |
	|         | -p download-only-012491             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	|         | --driver=docker                     |                      |         |         |                     |                     |
	|         | --container-runtime=containerd      |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/29 10:23:39
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0729 10:23:39.743448 2910204 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:23:39.743589 2910204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:23:39.743598 2910204 out.go:304] Setting ErrFile to fd 2...
	I0729 10:23:39.743604 2910204 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:23:39.743856 2910204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-2904404/.minikube/bin
	I0729 10:23:39.744246 2910204 out.go:298] Setting JSON to true
	I0729 10:23:39.745148 2910204 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":65170,"bootTime":1722183450,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0729 10:23:39.745220 2910204 start.go:139] virtualization:  
	I0729 10:23:39.747851 2910204 out.go:97] [download-only-012491] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0729 10:23:39.748071 2910204 notify.go:220] Checking for updates...
	I0729 10:23:39.750057 2910204 out.go:169] MINIKUBE_LOCATION=19337
	I0729 10:23:39.752023 2910204 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:23:39.754317 2910204 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19337-2904404/kubeconfig
	I0729 10:23:39.756721 2910204 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-2904404/.minikube
	I0729 10:23:39.759004 2910204 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0729 10:23:39.763401 2910204 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0729 10:23:39.763671 2910204 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:23:39.784803 2910204 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0729 10:23:39.784917 2910204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 10:23:39.848105 2910204 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-29 10:23:39.838418443 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0729 10:23:39.848237 2910204 docker.go:307] overlay module found
	I0729 10:23:39.850142 2910204 out.go:97] Using the docker driver based on user configuration
	I0729 10:23:39.850174 2910204 start.go:297] selected driver: docker
	I0729 10:23:39.850182 2910204 start.go:901] validating driver "docker" against <nil>
	I0729 10:23:39.850298 2910204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 10:23:39.905168 2910204 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-07-29 10:23:39.895980606 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0729 10:23:39.905339 2910204 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0729 10:23:39.905633 2910204 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0729 10:23:39.905796 2910204 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0729 10:23:39.908266 2910204 out.go:169] Using Docker driver with root privileges
	I0729 10:23:39.910144 2910204 cni.go:84] Creating CNI manager for ""
	I0729 10:23:39.910169 2910204 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0729 10:23:39.910182 2910204 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0729 10:23:39.910280 2910204 start.go:340] cluster config:
	{Name:download-only-012491 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-012491 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:23:39.912537 2910204 out.go:97] Starting "download-only-012491" primary control-plane node in "download-only-012491" cluster
	I0729 10:23:39.912563 2910204 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0729 10:23:39.915111 2910204 out.go:97] Pulling base image v0.0.44-1721902582-19326 ...
	I0729 10:23:39.915139 2910204 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime containerd
	I0729 10:23:39.915242 2910204 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local docker daemon
	I0729 10:23:39.930683 2910204 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 to local cache
	I0729 10:23:39.930821 2910204 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory
	I0729 10:23:39.930843 2910204 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 in local cache directory, skipping pull
	I0729 10:23:39.930848 2910204 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 exists in cache, skipping pull
	I0729 10:23:39.930858 2910204 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 as a tarball
	I0729 10:23:39.980510 2910204 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-containerd-overlay2-arm64.tar.lz4
	I0729 10:23:39.980543 2910204 cache.go:56] Caching tarball of preloaded images
	I0729 10:23:39.981119 2910204 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime containerd
	I0729 10:23:39.983282 2910204 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0729 10:23:39.983312 2910204 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-containerd-overlay2-arm64.tar.lz4 ...
	I0729 10:23:40.119057 2910204 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:e1550e32e6115d92010b4a739f5f0833 -> /home/jenkins/minikube-integration/19337-2904404/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-012491 host does not exist
	  To start a cluster, run: "minikube start -p download-only-012491"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.26s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.39s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.39s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.22s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-012491
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.22s)

TestBinaryMirror (0.54s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-323309 --alsologtostderr --binary-mirror http://127.0.0.1:36657 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-323309" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-323309
--- PASS: TestBinaryMirror (0.54s)
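
TestBinaryMirror points minikube at "--binary-mirror http://127.0.0.1:36657", so kubectl/kubeadm/kubelet are fetched from a local HTTP endpoint instead of the public release bucket. Any static file server exposing the expected paths should work; a minimal Go stand-in (the ./mirror directory layout is an assumption for illustration, not what the test harness actually serves):

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        // Serve a local directory as the binary mirror; the port is taken
        // from the test invocation above.
        http.Handle("/", http.FileServer(http.Dir("./mirror")))
        log.Fatal(http.ListenAndServe("127.0.0.1:36657", nil))
    }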

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-299185
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-299185: exit status 85 (65.565635ms)

-- stdout --
	* Profile "addons-299185" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-299185"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
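
The "exit status 85" assertions here come from running the binary and inspecting its process exit code. A small sketch of how a harness can do that in Go; the binary path and args are copied from the test above, the rest is illustrative:

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64", "addons", "enable", "dashboard", "-p", "addons-299185")
        err := cmd.Run()
        var ee *exec.ExitError
        if errors.As(err, &ee) {
            // 85 is what the missing-profile path returns in the log above.
            fmt.Println("exit code:", ee.ExitCode())
        } else if err != nil {
            fmt.Println("could not start command:", err)
        }
    }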

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-299185
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-299185: exit status 85 (63.261332ms)

-- stdout --
	* Profile "addons-299185" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-299185"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (160.38s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-299185 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-299185 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (2m40.374718319s)
--- PASS: TestAddons/Setup (160.38s)

TestAddons/serial/GCPAuth/Namespaces (0.16s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-299185 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-299185 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

TestAddons/parallel/Registry (16.25s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.795517ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-4z48h" [cab9244c-6d04-49d8-a796-1f4e4c1c4a12] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.007390033s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-72wcs" [54696946-5a63-4251-be62-cb68e1b927df] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004265595s
addons_test.go:342: (dbg) Run:  kubectl --context addons-299185 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-299185 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-299185 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.265345966s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-299185 ip
2024/07/29 10:30:23 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-299185 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.25s)
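
The registry check boils down to an HTTP reachability probe: "wget --spider" from inside the cluster, plus the GET against 192.168.49.2:5000 shown in the DEBUG line. A rough Go equivalent of that probe (the URL is taken from the log; the 5-second timeout is an assumption):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 5 * time.Second}
        resp, err := client.Head("http://192.168.49.2:5000/")
        if err != nil {
            fmt.Println("registry unreachable:", err)
            return
        }
        resp.Body.Close()
        fmt.Println("registry status:", resp.Status) // any 2xx counts as healthy
    }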

TestAddons/parallel/Ingress (21.19s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-299185 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-299185 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-299185 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [21bc425f-54fd-47e7-b86a-444e139a139a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [21bc425f-54fd-47e7-b86a-444e139a139a] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003815187s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-299185 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-299185 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-299185 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-299185 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-299185 addons disable ingress-dns --alsologtostderr -v=1: (2.224328939s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-299185 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-299185 addons disable ingress --alsologtostderr -v=1: (7.85661599s)
--- PASS: TestAddons/parallel/Ingress (21.19s)
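
Note: the assertions above reduce to two manual checks; a sketch using the profile from this run (hello-john.test comes from testdata/ingress-dns-example-v1.yaml):

    # Route to the nginx backend through the ingress controller by Host header, from inside the node.
    minikube -p addons-299185 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # ingress-dns: hostnames in the example zone resolve against the node IP.
    nslookup hello-john.test "$(minikube -p addons-299185 ip)"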

TestAddons/parallel/InspektorGadget (11.92s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-hnltc" [bd63f80e-7f09-4013-9880-fec9404b8fdb] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003637342s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-299185
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-299185: (5.911700627s)
--- PASS: TestAddons/parallel/InspektorGadget (11.92s)

TestAddons/parallel/MetricsServer (7.03s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.936994ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-wtmps" [e34b40fc-8809-456b-9af1-ceb94b883425] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004260449s
addons_test.go:417: (dbg) Run:  kubectl --context addons-299185 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-299185 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (7.03s)

TestAddons/parallel/CSI (63.75s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 5.7601ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-299185 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-299185 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d6241684-5c36-40e0-b93d-30375c3627e1] Pending
helpers_test.go:344: "task-pv-pod" [d6241684-5c36-40e0-b93d-30375c3627e1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d6241684-5c36-40e0-b93d-30375c3627e1] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003429493s
addons_test.go:590: (dbg) Run:  kubectl --context addons-299185 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-299185 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-299185 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-299185 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-299185 delete pod task-pv-pod: (1.453529494s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-299185 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-299185 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-299185 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [904c2287-8a33-4e64-b3f7-29c643c12185] Pending
helpers_test.go:344: "task-pv-pod-restore" [904c2287-8a33-4e64-b3f7-29c643c12185] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [904c2287-8a33-4e64-b3f7-29c643c12185] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.013701649s
addons_test.go:632: (dbg) Run:  kubectl --context addons-299185 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-299185 delete pod task-pv-pod-restore: (1.395257104s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-299185 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-299185 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-299185 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-299185 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.01901224s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-299185 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (63.75s)
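
Note: the run above is the standard CSI snapshot/restore round-trip. Distilled into its six steps (manifests are the suite's testdata; context and object names as in this run):

    kubectl create -f testdata/csi-hostpath-driver/pvc.yaml            # 1. claim storage (hpvc)
    kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml         # 2. pod writes through the volume
    kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml       # 3. snapshot it (new-snapshot-demo)
    kubectl delete pod task-pv-pod && kubectl delete pvc hpvc          # 4. drop the original
    kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # 5. new claim sourced from the snapshot
    kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml # 6. pod mounts the restored volume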

TestAddons/parallel/Headlamp (16.92s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-299185 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-299185 --alsologtostderr -v=1: (1.045482548s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-zncgf" [77ac6e8e-24b9-4470-8110-4f51ddb5cf4f] Pending
helpers_test.go:344: "headlamp-7867546754-zncgf" [77ac6e8e-24b9-4470-8110-4f51ddb5cf4f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-zncgf" [77ac6e8e-24b9-4470-8110-4f51ddb5cf4f] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-zncgf" [77ac6e8e-24b9-4470-8110-4f51ddb5cf4f] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003804566s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-299185 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-299185 addons disable headlamp --alsologtostderr -v=1: (5.871492735s)
--- PASS: TestAddons/parallel/Headlamp (16.92s)

TestAddons/parallel/CloudSpanner (6.95s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-hbmgp" [34ab4c97-e18f-46d9-94b7-0aa4fcb9b741] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.005586131s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-299185
--- PASS: TestAddons/parallel/CloudSpanner (6.95s)

TestAddons/parallel/LocalPath (52.17s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-299185 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-299185 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-299185 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c58595d4-0a08-4d9d-a375-3fbbe4dbecc1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c58595d4-0a08-4d9d-a375-3fbbe4dbecc1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c58595d4-0a08-4d9d-a375-3fbbe4dbecc1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003236568s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-299185 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-299185 ssh "cat /opt/local-path-provisioner/pvc-a500ff3b-7759-4a57-861f-5b8a63fc23ca_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-299185 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-299185 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-299185 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-299185 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.907698156s)
--- PASS: TestAddons/parallel/LocalPath (52.17s)
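
Note: local-path volumes live under /opt/local-path-provisioner on the node, in a directory named <volume>_<namespace>_<claim>. The volume name is generated per claim, so the pvc-... directory below (taken from this run) is only illustrative:

    kubectl get pvc test-pvc -o jsonpath='{.spec.volumeName}'   # discover the generated volume name
    minikube -p addons-299185 ssh \
      "cat /opt/local-path-provisioner/pvc-a500ff3b-7759-4a57-861f-5b8a63fc23ca_default_test-pvc/file1"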

TestAddons/parallel/NvidiaDevicePlugin (6.65s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-djkh5" [8b50190f-ddbf-4864-928b-7b96c73d1e81] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004723008s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-299185
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.65s)

TestAddons/parallel/Yakd (11.89s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-p24kq" [9e915f4a-2b69-4c7c-9a40-4b184f450cdf] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004070793s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-299185 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-299185 addons disable yakd --alsologtostderr -v=1: (5.886138038s)
--- PASS: TestAddons/parallel/Yakd (11.89s)

TestAddons/StoppedEnableDisable (12.25s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-299185
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-299185: (11.984389269s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-299185
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-299185
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-299185
--- PASS: TestAddons/StoppedEnableDisable (12.25s)

TestCertOptions (36.29s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-873297 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-873297 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (33.596214191s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-873297 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-873297 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-873297 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-873297" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-873297
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-873297: (1.993639556s)
--- PASS: TestCertOptions (36.29s)
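
Note: the test pins extra SANs and a nonstandard port into the generated apiserver certificate, then reads the certificate back. A sketch of the same check (profile name from this run; any name works):

    minikube start -p cert-options-873297 --apiserver-port=8555 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --driver=docker --container-runtime=containerd
    # The requested IPs/names should appear under "X509v3 Subject Alternative Name".
    minikube -p cert-options-873297 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"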

TestCertExpiration (229.45s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-262221 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-262221 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (38.87518587s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-262221 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-262221 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (8.147733898s)
helpers_test.go:175: Cleaning up "cert-expiration-262221" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-262221
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-262221: (2.426629324s)
--- PASS: TestCertExpiration (229.45s)
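
Note: the two starts above are the whole trick: issue short-lived certs, let them lapse, and confirm a later start re-issues them with the new lifetime. Sketch (the sleep is illustrative; any wait past the 3m lifetime will do):

    minikube start -p cert-expiration-262221 --cert-expiration=3m      # certs valid for 3 minutes
    sleep 180                                                          # let them expire
    minikube start -p cert-expiration-262221 --cert-expiration=8760h   # restart re-issues for one year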

TestForceSystemdFlag (44.53s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-442512 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-442512 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (41.517442107s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-442512 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-442512" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-442512
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-442512: (2.67679783s)
--- PASS: TestForceSystemdFlag (44.53s)

TestForceSystemdEnv (44.47s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-680883 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0729 11:09:33.757640 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-680883 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (42.030729505s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-680883 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-680883" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-680883
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-680883: (2.105275838s)
--- PASS: TestForceSystemdEnv (44.47s)

TestDockerEnvContainerd (46.42s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-544034 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-544034 --driver=docker  --container-runtime=containerd: (30.290692232s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-544034"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-544034": (1.272680613s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-SkGXmf6zOhMx/agent.2928894" SSH_AGENT_PID="2928895" DOCKER_HOST=ssh://docker@127.0.0.1:36474 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-SkGXmf6zOhMx/agent.2928894" SSH_AGENT_PID="2928895" DOCKER_HOST=ssh://docker@127.0.0.1:36474 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-SkGXmf6zOhMx/agent.2928894" SSH_AGENT_PID="2928895" DOCKER_HOST=ssh://docker@127.0.0.1:36474 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.387558043s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-SkGXmf6zOhMx/agent.2928894" SSH_AGENT_PID="2928895" DOCKER_HOST=ssh://docker@127.0.0.1:36474 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-544034" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-544034
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-544034: (1.991216219s)
--- PASS: TestDockerEnvContainerd (46.42s)
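
Note: the harness splices the exported variables into each command; interactively one would eval the docker-env output instead. A sketch, assuming the profile name from this run:

    eval "$(minikube -p dockerenv-544034 docker-env --ssh-host --ssh-add)"
    docker version      # now talks to the daemon inside the minikube node over SSH
    docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls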

TestErrorSpam/setup (30.94s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-817544 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-817544 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-817544 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-817544 --driver=docker  --container-runtime=containerd: (30.943562437s)
--- PASS: TestErrorSpam/setup (30.94s)

TestErrorSpam/start (0.72s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-817544 --log_dir /tmp/nospam-817544 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-817544 --log_dir /tmp/nospam-817544 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-817544 --log_dir /tmp/nospam-817544 start --dry-run
--- PASS: TestErrorSpam/start (0.72s)

TestErrorSpam/status (0.99s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-817544 --log_dir /tmp/nospam-817544 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-817544 --log_dir /tmp/nospam-817544 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-817544 --log_dir /tmp/nospam-817544 status
--- PASS: TestErrorSpam/status (0.99s)

TestErrorSpam/pause (1.69s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-817544 --log_dir /tmp/nospam-817544 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-817544 --log_dir /tmp/nospam-817544 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-817544 --log_dir /tmp/nospam-817544 pause
--- PASS: TestErrorSpam/pause (1.69s)

TestErrorSpam/unpause (1.72s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-817544 --log_dir /tmp/nospam-817544 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-817544 --log_dir /tmp/nospam-817544 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-817544 --log_dir /tmp/nospam-817544 unpause
--- PASS: TestErrorSpam/unpause (1.72s)

TestErrorSpam/stop (1.41s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-817544 --log_dir /tmp/nospam-817544 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-817544 --log_dir /tmp/nospam-817544 stop: (1.216664691s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-817544 --log_dir /tmp/nospam-817544 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-817544 --log_dir /tmp/nospam-817544 stop
--- PASS: TestErrorSpam/stop (1.41s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/19337-2904404/.minikube/files/etc/test/nested/copy/2909789/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (68.96s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-arm64 start -p functional-788372 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2230: (dbg) Done: out/minikube-linux-arm64 start -p functional-788372 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m8.959239619s)
--- PASS: TestFunctional/serial/StartWithProxy (68.96s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.19s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-arm64 start -p functional-788372 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-arm64 start -p functional-788372 --alsologtostderr -v=8: (6.187078748s)
functional_test.go:659: soft start took 6.190999364s for "functional-788372" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.19s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-788372 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.91s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-788372 cache add registry.k8s.io/pause:3.1: (1.962264465s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-788372 cache add registry.k8s.io/pause:3.3: (1.570022542s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-arm64 -p functional-788372 cache add registry.k8s.io/pause:latest: (1.379382227s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.91s)

TestFunctional/serial/CacheCmd/cache/add_local (1.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-788372 /tmp/TestFunctionalserialCacheCmdcacheadd_local3542392767/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 cache add minikube-local-cache-test:functional-788372
functional_test.go:1090: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 cache delete minikube-local-cache-test:functional-788372
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-788372
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.49s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.18s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-788372 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (317.100427ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-arm64 -p functional-788372 cache reload: (1.250661364s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.18s)
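
Note: the reload flow above in three steps: remove the image out from under the node, confirm the miss, then have minikube push it back from the host-side cache. Sketch (profile name from this run):

    minikube -p functional-788372 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-788372 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # exits non-zero: image gone
    minikube -p functional-788372 cache reload                                           # re-push cached images into the node
    minikube -p functional-788372 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # found again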

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 kubectl -- --context functional-788372 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-788372 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (42.62s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-arm64 start -p functional-788372 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-arm64 start -p functional-788372 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.617501043s)
functional_test.go:757: restart took 42.617617794s for "functional-788372" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.62s)
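
Note: --extra-config takes component.flag=value pairs and is persisted in the profile, which is why the restart above re-applies it (see the ExtraOptions field in the DryRun config dump further down). Sketch:

    minikube start -p functional-788372 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all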

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-788372 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.77s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-arm64 -p functional-788372 logs: (1.770564394s)
--- PASS: TestFunctional/serial/LogsCmd (1.77s)

TestFunctional/serial/LogsFileCmd (1.73s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 logs --file /tmp/TestFunctionalserialLogsFileCmd2838952809/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-arm64 -p functional-788372 logs --file /tmp/TestFunctionalserialLogsFileCmd2838952809/001/logs.txt: (1.728481475s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.73s)

TestFunctional/serial/InvalidService (4.97s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-788372 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-788372
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-788372: exit status 115 (632.642577ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32742 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-788372 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-788372 delete -f testdata/invalidsvc.yaml: (1.080054427s)
--- PASS: TestFunctional/serial/InvalidService (4.97s)

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-788372 config get cpus: exit status 14 (81.890016ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-788372 config get cpus: exit status 14 (66.031239ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

TestFunctional/parallel/DashboardCmd (8.82s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-788372 --alsologtostderr -v=1]
E0729 10:36:40.952154 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
functional_test.go:906: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-788372 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2945448: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.82s)

TestFunctional/parallel/DryRun (0.52s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-arm64 start -p functional-788372 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
E0729 10:36:35.831147 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-788372 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (206.48868ms)

-- stdout --
	* [functional-788372] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19337-2904404/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-2904404/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0729 10:36:35.854408 2944129 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:36:35.854591 2944129 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:36:35.854622 2944129 out.go:304] Setting ErrFile to fd 2...
	I0729 10:36:35.854640 2944129 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:36:35.854881 2944129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-2904404/.minikube/bin
	I0729 10:36:35.855252 2944129 out.go:298] Setting JSON to false
	I0729 10:36:35.856349 2944129 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":65946,"bootTime":1722183450,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0729 10:36:35.856453 2944129 start.go:139] virtualization:  
	I0729 10:36:35.859234 2944129 out.go:177] * [functional-788372] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0729 10:36:35.861267 2944129 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 10:36:35.861414 2944129 notify.go:220] Checking for updates...
	I0729 10:36:35.864655 2944129 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:36:35.866510 2944129 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-2904404/kubeconfig
	I0729 10:36:35.868282 2944129 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-2904404/.minikube
	I0729 10:36:35.871106 2944129 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0729 10:36:35.874597 2944129 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:36:35.877181 2944129 config.go:182] Loaded profile config "functional-788372": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0729 10:36:35.878204 2944129 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:36:35.913586 2944129 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0729 10:36:35.913703 2944129 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 10:36:35.981537 2944129 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-07-29 10:36:35.970617278 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0729 10:36:35.981657 2944129 docker.go:307] overlay module found
	I0729 10:36:35.995190 2944129 out.go:177] * Using the docker driver based on existing profile
	I0729 10:36:35.997464 2944129 start.go:297] selected driver: docker
	I0729 10:36:35.997484 2944129 start.go:901] validating driver "docker" against &{Name:functional-788372 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-788372 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:36:35.997599 2944129 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:36:36.000570 2944129 out.go:177] 
	W0729 10:36:36.007489 2944129 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0729 10:36:36.011528 2944129 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-arm64 start -p functional-788372 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.52s)
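The dry-run exit captured above comes from minikube's pre-flight memory validation, which rejects the request before any Docker work starts. A reproduction sketch using the flags from this log (exit status 23 is the expected result, as the InternationalLanguage run below confirms):

	out/minikube-linux-arm64 start -p functional-788372 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd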

TestFunctional/parallel/InternationalLanguage (0.24s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-arm64 start -p functional-788372 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-788372 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (236.102881ms)
-- stdout --
	* [functional-788372] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19337-2904404/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-2904404/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0729 10:36:39.936366 2945166 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:36:39.937127 2945166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:36:39.937145 2945166 out.go:304] Setting ErrFile to fd 2...
	I0729 10:36:39.937151 2945166 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:36:39.938163 2945166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-2904404/.minikube/bin
	I0729 10:36:39.938681 2945166 out.go:298] Setting JSON to false
	I0729 10:36:39.939992 2945166 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":65950,"bootTime":1722183450,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0729 10:36:39.940113 2945166 start.go:139] virtualization:  
	I0729 10:36:39.943633 2945166 out.go:177] * [functional-788372] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0729 10:36:39.945435 2945166 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 10:36:39.945612 2945166 notify.go:220] Checking for updates...
	I0729 10:36:39.948981 2945166 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 10:36:39.950813 2945166 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-2904404/kubeconfig
	I0729 10:36:39.953030 2945166 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-2904404/.minikube
	I0729 10:36:39.955112 2945166 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0729 10:36:39.957051 2945166 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 10:36:39.959487 2945166 config.go:182] Loaded profile config "functional-788372": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0729 10:36:39.960113 2945166 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 10:36:39.989930 2945166 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0729 10:36:39.990054 2945166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 10:36:40.091355 2945166 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-07-29 10:36:40.07920697 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0729 10:36:40.091673 2945166 docker.go:307] overlay module found
	I0729 10:36:40.094837 2945166 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0729 10:36:40.097247 2945166 start.go:297] selected driver: docker
	I0729 10:36:40.097271 2945166 start.go:901] validating driver "docker" against &{Name:functional-788372 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721902582-19326@sha256:540fb5dc7f38be17ff5276a38dfe6c8a4b1d9ba1c27c62244e6eebd7e37696e7 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-788372 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0729 10:36:40.097397 2945166 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 10:36:40.101163 2945166 out.go:177] 
	W0729 10:36:40.103382 2945166 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0729 10:36:40.105315 2945166 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)
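The stdout and stderr above are the same dry-run failure rendered in French (the message mirrors the English RSRC_INSUFFICIENT_REQ_MEMORY text: the requested 250MiB is below the usable minimum of 1800MB); minikube picks its message catalog from the standard locale environment variables. A reproduction sketch, where forcing the locale via LC_ALL is an assumption about how the harness selects French:

	LC_ALL=fr out/minikube-linux-arm64 start -p functional-788372 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=containerd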

TestFunctional/parallel/StatusCmd (1.07s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.07s)

TestFunctional/parallel/ServiceCmdConnect (13.62s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1623: (dbg) Run:  kubectl --context functional-788372 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-788372 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-6f49f58cd5-dq2cg" [912b7063-f032-4026-afe6-e8a589aa96b8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-6f49f58cd5-dq2cg" [912b7063-f032-4026-afe6-e8a589aa96b8] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 13.003204463s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:30396
functional_test.go:1671: http://192.168.49.2:30396: success! body:

Hostname: hello-node-connect-6f49f58cd5-dq2cg

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30396
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (13.62s)
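The passing flow above can be replayed by hand with the same commands the test ran, plus a curl standing in for the test's Go HTTP client (a sketch; the NodePort URL differs per run):

	kubectl --context functional-788372 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
	kubectl --context functional-788372 expose deployment hello-node-connect --type=NodePort --port=8080
	curl "$(out/minikube-linux-arm64 -p functional-788372 service hello-node-connect --url)"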

TestFunctional/parallel/AddonsCmd (0.19s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (26.21s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f3841735-b657-43ef-9bba-1e8ed9189bff] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.00443711s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-788372 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-788372 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-788372 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-788372 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8388b2cc-fbda-43b4-bb64-8fa627d1109c] Pending
helpers_test.go:344: "sp-pod" [8388b2cc-fbda-43b4-bb64-8fa627d1109c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8388b2cc-fbda-43b4-bb64-8fa627d1109c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003981126s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-788372 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-788372 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-788372 delete -f testdata/storage-provisioner/pod.yaml: (1.146344687s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-788372 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [189feb8d-3e3d-4faf-8fcd-4184f9d17a58] Pending
helpers_test.go:344: "sp-pod" [189feb8d-3e3d-4faf-8fcd-4184f9d17a58] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004165798s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-788372 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.21s)
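The delete/re-apply sequence above is the actual persistence check: /tmp/mount in sp-pod is backed by the PVC, so a file written before the pod is deleted must still be visible in the replacement pod. A condensed sketch (the manifests are the repo's testdata files):

	kubectl --context functional-788372 exec sp-pod -- touch /tmp/mount/foo
	kubectl --context functional-788372 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-788372 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-788372 exec sp-pod -- ls /tmp/mount    # 'foo' should survive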

TestFunctional/parallel/SSHCmd (0.67s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.67s)

TestFunctional/parallel/CpCmd (2.31s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh -n functional-788372 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 cp functional-788372:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd224864996/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh -n functional-788372 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh -n functional-788372 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.31s)

TestFunctional/parallel/FileSync (0.41s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/2909789/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh "sudo cat /etc/test/nested/copy/2909789/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.41s)

TestFunctional/parallel/CertSync (2.12s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/2909789.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh "sudo cat /etc/ssl/certs/2909789.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/2909789.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh "sudo cat /usr/share/ca-certificates/2909789.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/29097892.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh "sudo cat /etc/ssl/certs/29097892.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/29097892.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh "sudo cat /usr/share/ca-certificates/29097892.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.12s)
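The .0 files checked above are OpenSSL subject-hash names for the synced certificates, so each .pem/.0 pair should agree. A verification sketch (assuming openssl is present in the node image; 51391683 is expected to match the symlink name checked alongside 2909789.pem):

	out/minikube-linux-arm64 -p functional-788372 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/2909789.pem"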

TestFunctional/parallel/NodeLabels (0.11s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-788372 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)
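The go-template above only enumerates the label keys; an equivalent spot-check without templating uses kubectl's built-in flag:

	kubectl --context functional-788372 get nodes --show-labels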

TestFunctional/parallel/NonActiveRuntimeDisabled (0.86s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-788372 ssh "sudo systemctl is-active docker": exit status 1 (480.364384ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-788372 ssh "sudo systemctl is-active crio": exit status 1 (374.916332ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.86s)
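systemctl is-active exits with status 3 for an inactive unit, which is what surfaces as the "ssh: Process exited with status 3" lines above. The active runtime should report the opposite (a sketch, assuming the unit is named containerd as in the kicbase image):

	out/minikube-linux-arm64 -p functional-788372 ssh "sudo systemctl is-active containerd"    # expected: active, exit 0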

TestFunctional/parallel/License (0.42s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.42s)

TestFunctional/parallel/Version/short (0.07s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.23s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 version -o=json --components
2024/07/29 10:36:48 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:2266: (dbg) Done: out/minikube-linux-arm64 -p functional-788372 version -o=json --components: (1.233375245s)
--- PASS: TestFunctional/parallel/Version/components (1.23s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-788372 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-788372
docker.io/kindest/kindnetd:v20240719-e7903573
docker.io/kindest/kindnetd:v20240715-585640e9
docker.io/kicbase/echo-server:functional-788372
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-788372 image ls --format short --alsologtostderr:
I0729 10:36:49.454102 2946794 out.go:291] Setting OutFile to fd 1 ...
I0729 10:36:49.454231 2946794 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:36:49.454241 2946794 out.go:304] Setting ErrFile to fd 2...
I0729 10:36:49.454247 2946794 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:36:49.454488 2946794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-2904404/.minikube/bin
I0729 10:36:49.455286 2946794 config.go:182] Loaded profile config "functional-788372": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0729 10:36:49.455437 2946794 config.go:182] Loaded profile config "functional-788372": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0729 10:36:49.456209 2946794 cli_runner.go:164] Run: docker container inspect functional-788372 --format={{.State.Status}}
I0729 10:36:49.482064 2946794 ssh_runner.go:195] Run: systemctl --version
I0729 10:36:49.482126 2946794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-788372
I0729 10:36:49.518320 2946794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36484 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/functional-788372/id_rsa Username:docker}
I0729 10:36:49.616455 2946794 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-788372 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:2437cf | 16.5MB |
| registry.k8s.io/kube-proxy                  | v1.30.3            | sha256:2351f5 | 25.6MB |
| registry.k8s.io/pause                       | 3.9                | sha256:829e9d | 268kB  |
| docker.io/library/nginx                     | alpine             | sha256:d7cd33 | 18.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/kube-controller-manager     | v1.30.3            | sha256:8e97cd | 28.4MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| docker.io/kindest/kindnetd                  | v20240719-e7903573 | sha256:f42786 | 33.3MB |
| docker.io/library/nginx                     | latest             | sha256:43b17f | 67.6MB |
| registry.k8s.io/etcd                        | 3.5.12-0           | sha256:014faa | 66.2MB |
| registry.k8s.io/kube-scheduler              | v1.30.3            | sha256:d48f99 | 17.6MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kicbase/echo-server               | functional-788372  | sha256:ce2d2c | 2.17MB |
| docker.io/kindest/kindnetd                  | v20240715-585640e9 | sha256:5e3296 | 33.3MB |
| docker.io/library/minikube-local-cache-test | functional-788372  | sha256:40a91c | 992B   |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-apiserver              | v1.30.3            | sha256:617731 | 29.9MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-788372 image ls --format table --alsologtostderr:
I0729 10:36:50.152043 2946963 out.go:291] Setting OutFile to fd 1 ...
I0729 10:36:50.152219 2946963 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:36:50.152226 2946963 out.go:304] Setting ErrFile to fd 2...
I0729 10:36:50.152232 2946963 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:36:50.152561 2946963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-2904404/.minikube/bin
I0729 10:36:50.153280 2946963 config.go:182] Loaded profile config "functional-788372": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0729 10:36:50.153435 2946963 config.go:182] Loaded profile config "functional-788372": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0729 10:36:50.153981 2946963 cli_runner.go:164] Run: docker container inspect functional-788372 --format={{.State.Status}}
I0729 10:36:50.181591 2946963 ssh_runner.go:195] Run: systemctl --version
I0729 10:36:50.181650 2946963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-788372
I0729 10:36:50.205589 2946963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36484 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/functional-788372/id_rsa Username:docker}
I0729 10:36:50.308298 2946963 ssh_runner.go:195] Run: sudo crictl images --output json
E0729 10:36:51.192963 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-788372 image ls --format json --alsologtostderr:
[{"id":"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"16482581"},{"id":"sha256:8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"28374500"},{"id":"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"268051"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:5e32961ddcea3ade65511b2e27f675bbda25305639279f8b708014019e8cebb2","repoDigests":["docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493"],"repoTags":["docker.io/kindest/kindnetd:v20240715-585640e9"],"size":"33290438"},{"id":"sha256:40a91c9beb93608523a9146f262364b8b3b7970d00e46b70856c26dd35a534b6","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-788372"],"size":"992"},{"id":"sha256:43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76","repoDigests":["docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c"],"repoTags":["docker.io/library/nginx:latest"],"size":"67647629"},{"id":"sha256:2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be","repoDigests":["registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65"],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"25645955"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:f42786f8afd2214fc59fbf9a26531806f562488d4a7d7a31e8b5e9ff6289b800","repoDigests":["docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a"],"repoTags":["docker.io/kindest/kindnetd:v20240719-e7903573"],"size":"33296266"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660","repoDigests":["docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9"],"repoTags":["docker.io/library/nginx:alpine"],"size":"18253575"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355","repoDigests":["registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4"],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"17641143"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-788372"],"size":"2173567"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd","repoDigests":["registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b"],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"66189079"},{"id":"sha256:61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca","repoDigests":["registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c"],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"29942692"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-788372 image ls --format json --alsologtostderr:
I0729 10:36:49.848148 2946876 out.go:291] Setting OutFile to fd 1 ...
I0729 10:36:49.848844 2946876 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:36:49.848853 2946876 out.go:304] Setting ErrFile to fd 2...
I0729 10:36:49.848858 2946876 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:36:49.849234 2946876 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-2904404/.minikube/bin
I0729 10:36:49.852308 2946876 config.go:182] Loaded profile config "functional-788372": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0729 10:36:49.852549 2946876 config.go:182] Loaded profile config "functional-788372": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0729 10:36:49.853076 2946876 cli_runner.go:164] Run: docker container inspect functional-788372 --format={{.State.Status}}
I0729 10:36:49.880323 2946876 ssh_runner.go:195] Run: systemctl --version
I0729 10:36:49.880396 2946876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-788372
I0729 10:36:49.904623 2946876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36484 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/functional-788372/id_rsa Username:docker}
I0729 10:36:50.014957 2946876 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)
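The stdout above is a single JSON array of image objects; a convenience sketch for pulling out just the tags (assumes jq is available on the host):

	out/minikube-linux-arm64 -p functional-788372 image ls --format json | jq -r '.[].repoTags[]'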

TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-arm64 -p functional-788372 image ls --format yaml --alsologtostderr:
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd
repoDigests:
- registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "66189079"
- id: sha256:2351f570ed0eac5533e538280d73c6aa5d6b6f6379f5f3fac08f51378621e6be
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b26e535e8ee1cbd7dc5642fb61bd36e9d23f32e9242ae0010b2905656e664f65
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "25645955"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:5e32961ddcea3ade65511b2e27f675bbda25305639279f8b708014019e8cebb2
repoDigests:
- docker.io/kindest/kindnetd@sha256:88ed2adbc140254762f98fad7f4b16d279117356ebaf95aebf191713c828a493
repoTags:
- docker.io/kindest/kindnetd:v20240715-585640e9
size: "33290438"
- id: sha256:f42786f8afd2214fc59fbf9a26531806f562488d4a7d7a31e8b5e9ff6289b800
repoDigests:
- docker.io/kindest/kindnetd@sha256:da8ad203ec15a72c313015e5609db44bfad7c95d8ce63e87ff97c66363b5680a
repoTags:
- docker.io/kindest/kindnetd:v20240719-e7903573
size: "33296266"
- id: sha256:43b17fe33c4b4cf8de762123d33e02f2ed0c5e1178002f533d4fb5df1e05fb76
repoDigests:
- docker.io/library/nginx@sha256:6af79ae5de407283dcea8b00d5c37ace95441fd58a8b1d2aa1ed93f5511bb18c
repoTags:
- docker.io/library/nginx:latest
size: "67647629"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "16482581"
- id: sha256:d48f992a22722fc0290769b8fab1186db239bbad4cff837fbb641c55faef9355
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:2147ab5d2c73dd84e28332fcbee6826d1648eed30a531a52a96501b37d7ee4e4
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "17641143"
- id: sha256:61773190d42ff0792f3bab2658e80b1c07519170955bb350b153b564ef28f4ca
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:a36d558835e48950f6d13b1edbe20605b8dfbc81e088f58221796631e107966c
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "29942692"
- id: sha256:8e97cdb19e7cc420af7c71de8b5c9ab536bd278758c8c0878c464b833d91b31a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:eff43da55a29a5e66ec9480f28233d733a6a8433b7a46f6e8c07086fa4ef69b7
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "28374500"
- id: sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
repoTags:
- registry.k8s.io/pause:3.9
size: "268051"
- id: sha256:40a91c9beb93608523a9146f262364b8b3b7970d00e46b70856c26dd35a534b6
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-788372
size: "992"
- id: sha256:d7cd33d7d4ed1cdef69594adc36fcc03a0aa45ba930d39a9286024d9b2322660
repoDigests:
- docker.io/library/nginx@sha256:208b70eefac13ee9be00e486f79c695b15cef861c680527171a27d253d834be9
repoTags:
- docker.io/library/nginx:alpine
size: "18253575"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-788372
size: "2173567"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-arm64 -p functional-788372 image ls --format yaml --alsologtostderr:
I0729 10:36:49.511362 2946805 out.go:291] Setting OutFile to fd 1 ...
I0729 10:36:49.511651 2946805 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:36:49.511683 2946805 out.go:304] Setting ErrFile to fd 2...
I0729 10:36:49.511704 2946805 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:36:49.512036 2946805 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-2904404/.minikube/bin
I0729 10:36:49.512747 2946805 config.go:182] Loaded profile config "functional-788372": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0729 10:36:49.512920 2946805 config.go:182] Loaded profile config "functional-788372": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0729 10:36:49.513407 2946805 cli_runner.go:164] Run: docker container inspect functional-788372 --format={{.State.Status}}
I0729 10:36:49.546778 2946805 ssh_runner.go:195] Run: systemctl --version
I0729 10:36:49.546956 2946805 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-788372
I0729 10:36:49.572662 2946805 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36484 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/functional-788372/id_rsa Username:docker}
I0729 10:36:49.670462 2946805 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.16s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-788372 ssh pgrep buildkitd: exit status 1 (362.718046ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 image build -t localhost/my-image:functional-788372 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-arm64 -p functional-788372 image build -t localhost/my-image:functional-788372 testdata/build --alsologtostderr: (2.573262646s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-arm64 -p functional-788372 image build -t localhost/my-image:functional-788372 testdata/build --alsologtostderr:
I0729 10:36:50.099944 2946958 out.go:291] Setting OutFile to fd 1 ...
I0729 10:36:50.100940 2946958 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:36:50.100953 2946958 out.go:304] Setting ErrFile to fd 2...
I0729 10:36:50.100959 2946958 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0729 10:36:50.101268 2946958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-2904404/.minikube/bin
I0729 10:36:50.101996 2946958 config.go:182] Loaded profile config "functional-788372": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0729 10:36:50.103983 2946958 config.go:182] Loaded profile config "functional-788372": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
I0729 10:36:50.104539 2946958 cli_runner.go:164] Run: docker container inspect functional-788372 --format={{.State.Status}}
I0729 10:36:50.139237 2946958 ssh_runner.go:195] Run: systemctl --version
I0729 10:36:50.139287 2946958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-788372
I0729 10:36:50.170934 2946958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36484 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/functional-788372/id_rsa Username:docker}
I0729 10:36:50.280121 2946958 build_images.go:161] Building image from path: /tmp/build.1741060408.tar
I0729 10:36:50.280240 2946958 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0729 10:36:50.290268 2946958 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1741060408.tar
I0729 10:36:50.293651 2946958 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1741060408.tar: stat -c "%s %y" /var/lib/minikube/build/build.1741060408.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1741060408.tar': No such file or directory
I0729 10:36:50.293682 2946958 ssh_runner.go:362] scp /tmp/build.1741060408.tar --> /var/lib/minikube/build/build.1741060408.tar (3072 bytes)
I0729 10:36:50.330139 2946958 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1741060408
I0729 10:36:50.340199 2946958 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1741060408 -xf /var/lib/minikube/build/build.1741060408.tar
I0729 10:36:50.350194 2946958 containerd.go:394] Building image: /var/lib/minikube/build/build.1741060408
I0729 10:36:50.350327 2946958 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1741060408 --local dockerfile=/var/lib/minikube/build/build.1741060408 --output type=image,name=localhost/my-image:functional-788372
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.3s

#6 [2/3] RUN true
#6 DONE 0.8s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:8fce42e97ae92f9002cb7adca53cbd2f2e79a486dc897a8504f7dbb60342c41c
#8 exporting manifest sha256:8fce42e97ae92f9002cb7adca53cbd2f2e79a486dc897a8504f7dbb60342c41c 0.0s done
#8 exporting config sha256:14c701f68c54b18d982e18a59ac45e1e8925914d7ccf4da2bd73c8cc84399242 0.0s done
#8 naming to localhost/my-image:functional-788372 done
#8 DONE 0.1s
I0729 10:36:52.575318 2946958 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1741060408 --local dockerfile=/var/lib/minikube/build/build.1741060408 --output type=image,name=localhost/my-image:functional-788372: (2.224944953s)
I0729 10:36:52.575408 2946958 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1741060408
I0729 10:36:52.585187 2946958 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1741060408.tar
I0729 10:36:52.597377 2946958 build_images.go:217] Built localhost/my-image:functional-788372 from /tmp/build.1741060408.tar
I0729 10:36:52.597405 2946958 build_images.go:133] succeeded building to: functional-788372
I0729 10:36:52.597411 2946958 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.16s)
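
Note: the buildctl stages above (#1, #5-#7) imply a three-instruction Dockerfile of 97 bytes. A sketch reconstructed from the logged stages is below; the real testdata/build Dockerfile may differ in details.

  # hypothetical reconstruction of testdata/build/Dockerfile
  cat > Dockerfile <<'EOF'
  FROM gcr.io/k8s-minikube/busybox:latest
  RUN true
  ADD content.txt /
  EOF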

TestFunctional/parallel/ImageCommands/Setup (0.74s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-788372
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.74s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.66s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 image load --daemon docker.io/kicbase/echo-server:functional-788372 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-arm64 -p functional-788372 image load --daemon docker.io/kicbase/echo-server:functional-788372 --alsologtostderr: (1.370821109s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.66s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 image load --daemon docker.io/kicbase/echo-server:functional-788372 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.32s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-788372
functional_test.go:244: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 image load --daemon docker.io/kicbase/echo-server:functional-788372 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-arm64 -p functional-788372 image load --daemon docker.io/kicbase/echo-server:functional-788372 --alsologtostderr: (1.054064015s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.61s)

TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1311: Took "371.020028ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1325: Took "81.615457ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1362: Took "419.289458ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1375: Took "69.769325ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 image save docker.io/kicbase/echo-server:functional-788372 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.77s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-788372 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-788372 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-788372 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-788372 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2942669: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 image rm docker.io/kicbase/echo-server:functional-788372 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.80s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-788372 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.51s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-788372 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [d42da27e-1588-4826-8c42-6f14eaf5668f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [d42da27e-1588-4826-8c42-6f14eaf5668f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004310091s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.51s)
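
Note: testdata/testsvc.yaml is not reproduced in the log. A minimal manifest that would satisfy this wait (a run=nginx-svc pod plus a LoadBalancer service for the tunnel tests that follow) might look like the sketch below; the pod image is an assumption, while the container name matches the ContainersNotReady message above.

  kubectl --context functional-788372 apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: nginx-svc
    labels:
      run: nginx-svc
  spec:
    containers:
    - name: nginx
      image: nginx:alpine   # assumed image
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: nginx-svc
  spec:
    type: LoadBalancer
    selector:
      run: nginx-svc
    ports:
    - port: 80
  EOF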

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.71s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-788372
functional_test.go:423: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 image save --daemon docker.io/kicbase/echo-server:functional-788372 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-788372
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.85s)
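
Note: the ImageSaveToFile, ImageLoadFromFile, and ImageSaveDaemon subtests together exercise a full image round trip between the host Docker daemon and the cluster runtime. The equivalent manual sequence, using a relative tar path instead of the workspace path above, is roughly:

  out/minikube-linux-arm64 -p functional-788372 image save \
    docker.io/kicbase/echo-server:functional-788372 ./echo-server-save.tar
  out/minikube-linux-arm64 -p functional-788372 image load ./echo-server-save.tar
  docker rmi docker.io/kicbase/echo-server:functional-788372
  out/minikube-linux-arm64 -p functional-788372 image save --daemon \
    docker.io/kicbase/echo-server:functional-788372
  docker image inspect docker.io/kicbase/echo-server:functional-788372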

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-788372 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.154.114 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)
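
Note: the StartTunnel/IngressIP/AccessDirect trio amounts to the following manual check; the test uses a Go HTTP client rather than curl, so curl here is illustrative only:

  out/minikube-linux-arm64 -p functional-788372 tunnel --alsologtostderr &
  IP=$(kubectl --context functional-788372 get svc nginx-svc \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  curl -s "http://$IP"   # answered at http://10.97.154.114 in this run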

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-788372 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1433: (dbg) Run:  kubectl --context functional-788372 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-788372 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-65f5d5cc78-6j4cr" [ea7c817c-7d6b-4cb2-81f2-53af3031a645] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E0729 10:36:30.709925 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
E0729 10:36:30.715933 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
E0729 10:36:30.726201 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
E0729 10:36:30.746508 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
E0729 10:36:30.786804 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
E0729 10:36:30.867084 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
E0729 10:36:31.027534 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
helpers_test.go:344: "hello-node-65f5d5cc78-6j4cr" [ea7c817c-7d6b-4cb2-81f2-53af3031a645] Running
E0729 10:36:31.348350 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
E0729 10:36:31.989652 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
E0729 10:36:33.269894 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.004342815s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.27s)

TestFunctional/parallel/ServiceCmd/List (0.67s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.67s)

TestFunctional/parallel/MountCmd/any-port (7.3s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-788372 /tmp/TestFunctionalparallelMountCmdany-port2091866820/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1722249396332553501" to /tmp/TestFunctionalparallelMountCmdany-port2091866820/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1722249396332553501" to /tmp/TestFunctionalparallelMountCmdany-port2091866820/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1722249396332553501" to /tmp/TestFunctionalparallelMountCmdany-port2091866820/001/test-1722249396332553501
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-788372 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (403.994667ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 29 10:36 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 29 10:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 29 10:36 test-1722249396332553501
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh cat /mount-9p/test-1722249396332553501
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-788372 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [f3dcccba-03b1-4ead-b669-10a66cd1b258] Pending
helpers_test.go:344: "busybox-mount" [f3dcccba-03b1-4ead-b669-10a66cd1b258] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [f3dcccba-03b1-4ead-b669-10a66cd1b258] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [f3dcccba-03b1-4ead-b669-10a66cd1b258] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004561845s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-788372 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-788372 /tmp/TestFunctionalparallelMountCmdany-port2091866820/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.30s)
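
Note: the any-port mount flow can be reproduced by hand; SRC below stands for any host directory (the test uses a per-run /tmp path):

  out/minikube-linux-arm64 mount -p functional-788372 "$SRC":/mount-9p --alsologtostderr -v=1 &
  out/minikube-linux-arm64 -p functional-788372 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-arm64 -p functional-788372 ssh -- ls -la /mount-9p
  out/minikube-linux-arm64 -p functional-788372 ssh "sudo umount -f /mount-9p"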

TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 service list -o json
functional_test.go:1490: Took "548.144696ms" to run "out/minikube-linux-arm64 -p functional-788372 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.49.2:31638
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

TestFunctional/parallel/ServiceCmd/Format (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.57s)

TestFunctional/parallel/ServiceCmd/URL (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:31638
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)
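
Note: the ServiceCmd subtests all build on one NodePort service created in DeployApp above; end to end, the flow is:

  kubectl --context functional-788372 create deployment hello-node \
    --image=registry.k8s.io/echoserver-arm:1.8
  kubectl --context functional-788372 expose deployment hello-node \
    --type=NodePort --port=8080
  out/minikube-linux-arm64 -p functional-788372 service hello-node --url
  # -> http://192.168.49.2:31638 (node IP : allocated NodePort)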

TestFunctional/parallel/MountCmd/specific-port (2.41s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-788372 /tmp/TestFunctionalparallelMountCmdspecific-port3457568541/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-788372 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (509.379984ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-788372 /tmp/TestFunctionalparallelMountCmdspecific-port3457568541/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-788372 ssh "sudo umount -f /mount-9p": exit status 1 (314.263435ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-788372 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-788372 /tmp/TestFunctionalparallelMountCmdspecific-port3457568541/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.41s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.04s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-788372 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4293775712/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-788372 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4293775712/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-788372 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4293775712/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Done: out/minikube-linux-arm64 -p functional-788372 ssh "findmnt -T" /mount1: (1.047525943s)
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-788372 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-788372 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-788372 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4293775712/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-788372 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4293775712/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-788372 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4293775712/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.04s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-788372
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-788372
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-788372
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (123.57s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-481480 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0729 10:37:11.673211 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
E0729 10:37:52.633938 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-481480 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m2.672495873s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (123.57s)

TestMultiControlPlane/serial/DeployApp (35.65s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481480 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481480 -- rollout status deployment/busybox
E0729 10:39:14.554781 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-481480 -- rollout status deployment/busybox: (32.70472282s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481480 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481480 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481480 -- exec busybox-fc5497c4f-5jvhv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481480 -- exec busybox-fc5497c4f-6th6d -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481480 -- exec busybox-fc5497c4f-xvsq4 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481480 -- exec busybox-fc5497c4f-5jvhv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481480 -- exec busybox-fc5497c4f-6th6d -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481480 -- exec busybox-fc5497c4f-xvsq4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481480 -- exec busybox-fc5497c4f-5jvhv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481480 -- exec busybox-fc5497c4f-6th6d -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481480 -- exec busybox-fc5497c4f-xvsq4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (35.65s)
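
Note: ./testdata/ha/ha-pod-dns-test.yaml is not shown in the log. From the rollout target (deployment/busybox) and the three busybox-fc5497c4f-* pods above, a plausible sketch is a 3-replica busybox Deployment; the image and command are assumptions:

  out/minikube-linux-arm64 kubectl -p ha-481480 -- apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: busybox
  spec:
    replicas: 3
    selector:
      matchLabels:
        app: busybox
    template:
      metadata:
        labels:
          app: busybox
      spec:
        containers:
        - name: busybox
          image: gcr.io/k8s-minikube/busybox   # assumed image
          command: ["sleep", "3600"]
  EOF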

TestMultiControlPlane/serial/PingHostFromPods (1.61s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481480 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481480 -- exec busybox-fc5497c4f-5jvhv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481480 -- exec busybox-fc5497c4f-5jvhv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481480 -- exec busybox-fc5497c4f-6th6d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481480 -- exec busybox-fc5497c4f-6th6d -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481480 -- exec busybox-fc5497c4f-xvsq4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-481480 -- exec busybox-fc5497c4f-xvsq4 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.61s)
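
Note: the pipeline above parses busybox nslookup output, whose fixed layout puts the answer record on line 5 with the address in field 3; the extracted host gateway IP (192.168.49.1 here) is then pinged:

  # run inside a busybox pod; output layout of busybox nslookup assumed
  IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
  ping -c 1 "$IP"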

TestMultiControlPlane/serial/AddWorkerNode (24.3s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-481480 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-481480 -v=7 --alsologtostderr: (22.832950416s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-481480 status -v=7 --alsologtostderr: (1.463534713s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.30s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-481480 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.78s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.78s)

TestMultiControlPlane/serial/CopyFile (19.33s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-481480 status --output json -v=7 --alsologtostderr: (1.077545961s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 cp testdata/cp-test.txt ha-481480:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 cp ha-481480:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1057943213/001/cp-test_ha-481480.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 cp ha-481480:/home/docker/cp-test.txt ha-481480-m02:/home/docker/cp-test_ha-481480_ha-481480-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480-m02 "sudo cat /home/docker/cp-test_ha-481480_ha-481480-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 cp ha-481480:/home/docker/cp-test.txt ha-481480-m03:/home/docker/cp-test_ha-481480_ha-481480-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480-m03 "sudo cat /home/docker/cp-test_ha-481480_ha-481480-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 cp ha-481480:/home/docker/cp-test.txt ha-481480-m04:/home/docker/cp-test_ha-481480_ha-481480-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480-m04 "sudo cat /home/docker/cp-test_ha-481480_ha-481480-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 cp testdata/cp-test.txt ha-481480-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 cp ha-481480-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1057943213/001/cp-test_ha-481480-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 cp ha-481480-m02:/home/docker/cp-test.txt ha-481480:/home/docker/cp-test_ha-481480-m02_ha-481480.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480 "sudo cat /home/docker/cp-test_ha-481480-m02_ha-481480.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 cp ha-481480-m02:/home/docker/cp-test.txt ha-481480-m03:/home/docker/cp-test_ha-481480-m02_ha-481480-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480-m03 "sudo cat /home/docker/cp-test_ha-481480-m02_ha-481480-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 cp ha-481480-m02:/home/docker/cp-test.txt ha-481480-m04:/home/docker/cp-test_ha-481480-m02_ha-481480-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480-m04 "sudo cat /home/docker/cp-test_ha-481480-m02_ha-481480-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 cp testdata/cp-test.txt ha-481480-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 cp ha-481480-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1057943213/001/cp-test_ha-481480-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 cp ha-481480-m03:/home/docker/cp-test.txt ha-481480:/home/docker/cp-test_ha-481480-m03_ha-481480.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480 "sudo cat /home/docker/cp-test_ha-481480-m03_ha-481480.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 cp ha-481480-m03:/home/docker/cp-test.txt ha-481480-m02:/home/docker/cp-test_ha-481480-m03_ha-481480-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480-m02 "sudo cat /home/docker/cp-test_ha-481480-m03_ha-481480-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 cp ha-481480-m03:/home/docker/cp-test.txt ha-481480-m04:/home/docker/cp-test_ha-481480-m03_ha-481480-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480-m04 "sudo cat /home/docker/cp-test_ha-481480-m03_ha-481480-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 cp testdata/cp-test.txt ha-481480-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 cp ha-481480-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1057943213/001/cp-test_ha-481480-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 cp ha-481480-m04:/home/docker/cp-test.txt ha-481480:/home/docker/cp-test_ha-481480-m04_ha-481480.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480 "sudo cat /home/docker/cp-test_ha-481480-m04_ha-481480.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 cp ha-481480-m04:/home/docker/cp-test.txt ha-481480-m02:/home/docker/cp-test_ha-481480-m04_ha-481480-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480-m02 "sudo cat /home/docker/cp-test_ha-481480-m04_ha-481480-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 cp ha-481480-m04:/home/docker/cp-test.txt ha-481480-m03:/home/docker/cp-test_ha-481480-m04_ha-481480-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 ssh -n ha-481480-m03 "sudo cat /home/docker/cp-test_ha-481480-m04_ha-481480-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.33s)
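
Note: the CopyFile sequence is one pattern applied over every ordered pair of nodes {ha-481480, -m02, -m03, -m04} plus a host temp dir; NODE_A and NODE_B below stand for any two nodes:

  out/minikube-linux-arm64 -p ha-481480 cp testdata/cp-test.txt NODE_A:/home/docker/cp-test.txt
  out/minikube-linux-arm64 -p ha-481480 cp NODE_A:/home/docker/cp-test.txt \
    NODE_B:/home/docker/cp-test_NODE_A_NODE_B.txt
  out/minikube-linux-arm64 -p ha-481480 ssh -n NODE_B \
    "sudo cat /home/docker/cp-test_NODE_A_NODE_B.txt"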

TestMultiControlPlane/serial/StopSecondaryNode (12.95s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-481480 node stop m02 -v=7 --alsologtostderr: (12.226447188s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-481480 status -v=7 --alsologtostderr: exit status 7 (727.327703ms)

-- stdout --
	ha-481480
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-481480-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-481480-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-481480-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0729 10:40:33.296200 2963499 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:40:33.296406 2963499 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:40:33.296419 2963499 out.go:304] Setting ErrFile to fd 2...
	I0729 10:40:33.296424 2963499 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:40:33.296655 2963499 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-2904404/.minikube/bin
	I0729 10:40:33.296836 2963499 out.go:298] Setting JSON to false
	I0729 10:40:33.296881 2963499 mustload.go:65] Loading cluster: ha-481480
	I0729 10:40:33.296958 2963499 notify.go:220] Checking for updates...
	I0729 10:40:33.298239 2963499 config.go:182] Loaded profile config "ha-481480": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0729 10:40:33.298268 2963499 status.go:255] checking status of ha-481480 ...
	I0729 10:40:33.298950 2963499 cli_runner.go:164] Run: docker container inspect ha-481480 --format={{.State.Status}}
	I0729 10:40:33.317750 2963499 status.go:330] ha-481480 host status = "Running" (err=<nil>)
	I0729 10:40:33.317786 2963499 host.go:66] Checking if "ha-481480" exists ...
	I0729 10:40:33.319075 2963499 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481480
	I0729 10:40:33.351907 2963499 host.go:66] Checking if "ha-481480" exists ...
	I0729 10:40:33.352216 2963499 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:40:33.352278 2963499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481480
	I0729 10:40:33.369857 2963499 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36489 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/ha-481480/id_rsa Username:docker}
	I0729 10:40:33.465467 2963499 ssh_runner.go:195] Run: systemctl --version
	I0729 10:40:33.470533 2963499 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:40:33.484537 2963499 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 10:40:33.544258 2963499 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:71 SystemTime:2024-07-29 10:40:33.529661849 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0729 10:40:33.544910 2963499 kubeconfig.go:125] found "ha-481480" server: "https://192.168.49.254:8443"
	I0729 10:40:33.544945 2963499 api_server.go:166] Checking apiserver status ...
	I0729 10:40:33.544987 2963499 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:40:33.556994 2963499 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1582/cgroup
	I0729 10:40:33.566577 2963499 api_server.go:182] apiserver freezer: "11:freezer:/docker/8950066a84d673fb8d12320d89d9d3a84539362849456bb02a9c544734377c9f/kubepods/burstable/podbae33738dff5ba19a49647ea77dbf128/fda28af5dfc9b399e8fec1d1af794409db4991ff5fff9588dead0120132e616f"
	I0729 10:40:33.566654 2963499 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8950066a84d673fb8d12320d89d9d3a84539362849456bb02a9c544734377c9f/kubepods/burstable/podbae33738dff5ba19a49647ea77dbf128/fda28af5dfc9b399e8fec1d1af794409db4991ff5fff9588dead0120132e616f/freezer.state
	I0729 10:40:33.576435 2963499 api_server.go:204] freezer state: "THAWED"
	I0729 10:40:33.576466 2963499 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0729 10:40:33.584404 2963499 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0729 10:40:33.584432 2963499 status.go:422] ha-481480 apiserver status = Running (err=<nil>)
	I0729 10:40:33.584443 2963499 status.go:257] ha-481480 status: &{Name:ha-481480 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:40:33.584461 2963499 status.go:255] checking status of ha-481480-m02 ...
	I0729 10:40:33.584769 2963499 cli_runner.go:164] Run: docker container inspect ha-481480-m02 --format={{.State.Status}}
	I0729 10:40:33.601617 2963499 status.go:330] ha-481480-m02 host status = "Stopped" (err=<nil>)
	I0729 10:40:33.601643 2963499 status.go:343] host is not running, skipping remaining checks
	I0729 10:40:33.601650 2963499 status.go:257] ha-481480-m02 status: &{Name:ha-481480-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:40:33.601675 2963499 status.go:255] checking status of ha-481480-m03 ...
	I0729 10:40:33.602040 2963499 cli_runner.go:164] Run: docker container inspect ha-481480-m03 --format={{.State.Status}}
	I0729 10:40:33.618075 2963499 status.go:330] ha-481480-m03 host status = "Running" (err=<nil>)
	I0729 10:40:33.618101 2963499 host.go:66] Checking if "ha-481480-m03" exists ...
	I0729 10:40:33.618480 2963499 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481480-m03
	I0729 10:40:33.642982 2963499 host.go:66] Checking if "ha-481480-m03" exists ...
	I0729 10:40:33.643442 2963499 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:40:33.643567 2963499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481480-m03
	I0729 10:40:33.662012 2963499 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36499 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/ha-481480-m03/id_rsa Username:docker}
	I0729 10:40:33.756954 2963499 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:40:33.769356 2963499 kubeconfig.go:125] found "ha-481480" server: "https://192.168.49.254:8443"
	I0729 10:40:33.769383 2963499 api_server.go:166] Checking apiserver status ...
	I0729 10:40:33.769431 2963499 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:40:33.780920 2963499 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1416/cgroup
	I0729 10:40:33.790351 2963499 api_server.go:182] apiserver freezer: "11:freezer:/docker/f905b5192e78755f5d4f8fc5a47a5a7aa7d414fcdfd499be2317d329988ca05f/kubepods/burstable/podf92341b33949fdc3a23862a8cce907ae/2163923c65b760da25f884dea8c52e65ce2e69e11cbc8643430ab3d9ea850832"
	I0729 10:40:33.790431 2963499 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f905b5192e78755f5d4f8fc5a47a5a7aa7d414fcdfd499be2317d329988ca05f/kubepods/burstable/podf92341b33949fdc3a23862a8cce907ae/2163923c65b760da25f884dea8c52e65ce2e69e11cbc8643430ab3d9ea850832/freezer.state
	I0729 10:40:33.799603 2963499 api_server.go:204] freezer state: "THAWED"
	I0729 10:40:33.799686 2963499 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0729 10:40:33.807312 2963499 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0729 10:40:33.807341 2963499 status.go:422] ha-481480-m03 apiserver status = Running (err=<nil>)
	I0729 10:40:33.807351 2963499 status.go:257] ha-481480-m03 status: &{Name:ha-481480-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:40:33.807368 2963499 status.go:255] checking status of ha-481480-m04 ...
	I0729 10:40:33.807660 2963499 cli_runner.go:164] Run: docker container inspect ha-481480-m04 --format={{.State.Status}}
	I0729 10:40:33.824221 2963499 status.go:330] ha-481480-m04 host status = "Running" (err=<nil>)
	I0729 10:40:33.824251 2963499 host.go:66] Checking if "ha-481480-m04" exists ...
	I0729 10:40:33.824553 2963499 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481480-m04
	I0729 10:40:33.840993 2963499 host.go:66] Checking if "ha-481480-m04" exists ...
	I0729 10:40:33.841377 2963499 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:40:33.841457 2963499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481480-m04
	I0729 10:40:33.858385 2963499 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36504 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/ha-481480-m04/id_rsa Username:docker}
	I0729 10:40:33.948794 2963499 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:40:33.960645 2963499 status.go:257] ha-481480-m04 status: &{Name:ha-481480-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.95s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (18.69s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-481480 node start m02 -v=7 --alsologtostderr: (17.548315325s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-481480 status -v=7 --alsologtostderr: (1.022814189s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.69s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.74s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.74s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (151.38s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-481480 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-481480 -v=7 --alsologtostderr
E0729 10:41:07.263003 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
E0729 10:41:07.268557 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
E0729 10:41:07.278828 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
E0729 10:41:07.299046 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
E0729 10:41:07.339427 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
E0729 10:41:07.420515 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
E0729 10:41:07.582436 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
E0729 10:41:07.902939 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
E0729 10:41:08.543774 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
E0729 10:41:09.824722 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
E0729 10:41:12.385792 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
E0729 10:41:17.506570 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
E0729 10:41:27.746838 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
E0729 10:41:30.708402 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-481480 -v=7 --alsologtostderr: (37.240200768s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-481480 --wait=true -v=7 --alsologtostderr
E0729 10:41:48.227037 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
E0729 10:41:58.395848 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
E0729 10:42:29.187749 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-481480 --wait=true -v=7 --alsologtostderr: (1m53.975746333s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-481480
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (151.38s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.29s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-481480 node delete m03 -v=7 --alsologtostderr: (10.369921147s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.29s)
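
A readability sketch of the Ready-check template run at ha_test.go:519 above: the line breaks below are for reading only (a Go template echoes literal text between actions, so the one-line form in the log is what the test actually executes). For each node it walks .status.conditions and prints the status (True/False/Unknown) of the condition whose type is Ready, one node per line:

	kubectl get nodes -o go-template='
	  {{range .items}}
	    {{range .status.conditions}}
	      {{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}
	    {{end}}
	  {{end}}'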

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.54s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.54s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 stop -v=7 --alsologtostderr
E0729 10:43:51.107950 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-481480 stop -v=7 --alsologtostderr: (35.890088239s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-481480 status -v=7 --alsologtostderr: exit status 7 (112.554758ms)

-- stdout --
	ha-481480
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-481480-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-481480-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:44:13.125687 2977814 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:44:13.125854 2977814 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:44:13.125868 2977814 out.go:304] Setting ErrFile to fd 2...
	I0729 10:44:13.125875 2977814 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:44:13.126139 2977814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-2904404/.minikube/bin
	I0729 10:44:13.126359 2977814 out.go:298] Setting JSON to false
	I0729 10:44:13.126416 2977814 mustload.go:65] Loading cluster: ha-481480
	I0729 10:44:13.126521 2977814 notify.go:220] Checking for updates...
	I0729 10:44:13.126846 2977814 config.go:182] Loaded profile config "ha-481480": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0729 10:44:13.126857 2977814 status.go:255] checking status of ha-481480 ...
	I0729 10:44:13.127318 2977814 cli_runner.go:164] Run: docker container inspect ha-481480 --format={{.State.Status}}
	I0729 10:44:13.143672 2977814 status.go:330] ha-481480 host status = "Stopped" (err=<nil>)
	I0729 10:44:13.143697 2977814 status.go:343] host is not running, skipping remaining checks
	I0729 10:44:13.143704 2977814 status.go:257] ha-481480 status: &{Name:ha-481480 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:44:13.143727 2977814 status.go:255] checking status of ha-481480-m02 ...
	I0729 10:44:13.144072 2977814 cli_runner.go:164] Run: docker container inspect ha-481480-m02 --format={{.State.Status}}
	I0729 10:44:13.160884 2977814 status.go:330] ha-481480-m02 host status = "Stopped" (err=<nil>)
	I0729 10:44:13.160906 2977814 status.go:343] host is not running, skipping remaining checks
	I0729 10:44:13.160913 2977814 status.go:257] ha-481480-m02 status: &{Name:ha-481480-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:44:13.160952 2977814 status.go:255] checking status of ha-481480-m04 ...
	I0729 10:44:13.161264 2977814 cli_runner.go:164] Run: docker container inspect ha-481480-m04 --format={{.State.Status}}
	I0729 10:44:13.185608 2977814 status.go:330] ha-481480-m04 host status = "Stopped" (err=<nil>)
	I0729 10:44:13.185635 2977814 status.go:343] host is not running, skipping remaining checks
	I0729 10:44:13.185643 2977814 status.go:257] ha-481480-m04 status: &{Name:ha-481480-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.00s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (78.6s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-481480 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-481480 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m17.656554846s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (78.60s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (46.81s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-481480 --control-plane -v=7 --alsologtostderr
E0729 10:46:07.262760 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-481480 --control-plane -v=7 --alsologtostderr: (45.839205208s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-481480 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (46.81s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.8s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.80s)

                                                
                                    
TestJSONOutput/start/Command (62.6s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-999518 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0729 10:46:30.708556 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
E0729 10:46:34.948978 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-999518 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m2.591390389s)
--- PASS: TestJSONOutput/start/Command (62.60s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.73s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-999518 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-999518 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.77s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-999518 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-999518 --output=json --user=testUser: (5.767133956s)
--- PASS: TestJSONOutput/stop/Command (5.77s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-585460 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-585460 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (72.770062ms)

-- stdout --
	{"specversion":"1.0","id":"d98d4f53-3034-42bb-93f9-ab6fe0ebcf69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-585460] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2ce05c4f-372c-47c3-8a2b-3ae274076f07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19337"}}
	{"specversion":"1.0","id":"d62661b9-b93c-4748-a8f5-5347d7ccd326","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4d62b0d2-e541-4cb0-b180-3240a687a5d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19337-2904404/kubeconfig"}}
	{"specversion":"1.0","id":"15aef7a3-dabe-4924-97d1-a6ac2d75a2b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-2904404/.minikube"}}
	{"specversion":"1.0","id":"2eb86053-475b-4448-9093-02b049012053","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4b6850a2-2425-44ab-84e4-41b49089a8c8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"91ab5aa8-4fd9-42d4-a798-1c7a8e2dbee5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-585460" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-585460
--- PASS: TestErrorJSONOutput (0.21s)
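
Each stdout line above is a CloudEvents-style JSON object, so the human-readable messages (including the final DRV_UNSUPPORTED_OS error) can be pulled out of the stream; a minimal sketch, assuming jq is available on the host:

	out/minikube-linux-arm64 start -p json-output-error-585460 --memory=2200 --output=json --wait=true --driver=fail | jq -r '.data.message // empty'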

                                                
                                    
TestKicCustomNetwork/create_custom_network (38.31s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-417184 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-417184 --network=: (36.210437112s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-417184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-417184
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-417184: (2.071635642s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.31s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (33.09s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-429742 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-429742 --network=bridge: (31.138869995s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-429742" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-429742
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-429742: (1.922055265s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.09s)

                                                
                                    
TestKicExistingNetwork (37.14s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-181296 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-181296 --network=existing-network: (35.013311366s)
helpers_test.go:175: Cleaning up "existing-network-181296" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-181296
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-181296: (1.970840362s)
--- PASS: TestKicExistingNetwork (37.14s)

                                                
                                    
TestKicCustomSubnet (34.98s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-391175 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-391175 --subnet=192.168.60.0/24: (32.869835756s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-391175 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-391175" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-391175
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-391175: (2.089884869s)
--- PASS: TestKicCustomSubnet (34.98s)

                                                
                                    
TestKicStaticIP (36.42s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-043095 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-043095 --static-ip=192.168.200.200: (34.120308517s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-043095 ip
helpers_test.go:175: Cleaning up "static-ip-043095" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-043095
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-043095: (2.135264827s)
--- PASS: TestKicStaticIP (36.42s)

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (69.71s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-757924 --driver=docker  --container-runtime=containerd
E0729 10:51:07.262687 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-757924 --driver=docker  --container-runtime=containerd: (29.014458542s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-761009 --driver=docker  --container-runtime=containerd
E0729 10:51:30.708609 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-761009 --driver=docker  --container-runtime=containerd: (35.223393541s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-757924
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-761009
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-761009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-761009
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-761009: (2.019253148s)
helpers_test.go:175: Cleaning up "first-757924" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-757924
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-757924: (2.249229848s)
--- PASS: TestMinikubeProfile (69.71s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (8.53s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-636894 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-636894 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.52440016s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.53s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-636894 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (6.57s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-649875 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-649875 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.570560204s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.57s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-649875 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.58s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-636894 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-636894 --alsologtostderr -v=5: (1.577778996s)
--- PASS: TestMountStart/serial/DeleteFirst (1.58s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-649875 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.22s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-649875
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-649875: (1.218515106s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.29s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-649875
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-649875: (6.294470642s)
--- PASS: TestMountStart/serial/RestartStopped (7.29s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-649875 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (75.2s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-856659 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0729 10:52:53.756666 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-856659 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m14.639690987s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (75.20s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (18.17s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-856659 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-856659 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-856659 -- rollout status deployment/busybox: (16.256967991s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-856659 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-856659 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-856659 -- exec busybox-fc5497c4f-2jd28 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-856659 -- exec busybox-fc5497c4f-kpdsb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-856659 -- exec busybox-fc5497c4f-2jd28 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-856659 -- exec busybox-fc5497c4f-kpdsb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-856659 -- exec busybox-fc5497c4f-2jd28 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-856659 -- exec busybox-fc5497c4f-kpdsb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (18.17s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.03s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-856659 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-856659 -- exec busybox-fc5497c4f-2jd28 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-856659 -- exec busybox-fc5497c4f-2jd28 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-856659 -- exec busybox-fc5497c4f-kpdsb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-856659 -- exec busybox-fc5497c4f-kpdsb -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.03s)
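
A sketch of the reachability check above (multinode_test.go:572/583), using the same commands the test execs inside each busybox pod: with busybox nslookup's output format, line 5 is the answer's "Address 1:" line, so awk 'NR==5' selects it and cut -d' ' -f3 keeps the IP field; that IP (the host gateway, 192.168.58.1 here) is then pinged once.

	nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
	ping -c 1 192.168.58.1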

                                                
                                    
TestMultiNode/serial/AddNode (15.48s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-856659 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-856659 -v 3 --alsologtostderr: (14.790243431s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (15.48s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-856659 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.35s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.9s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 cp testdata/cp-test.txt multinode-856659:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 ssh -n multinode-856659 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 cp multinode-856659:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2276826938/001/cp-test_multinode-856659.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 ssh -n multinode-856659 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 cp multinode-856659:/home/docker/cp-test.txt multinode-856659-m02:/home/docker/cp-test_multinode-856659_multinode-856659-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 ssh -n multinode-856659 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 ssh -n multinode-856659-m02 "sudo cat /home/docker/cp-test_multinode-856659_multinode-856659-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 cp multinode-856659:/home/docker/cp-test.txt multinode-856659-m03:/home/docker/cp-test_multinode-856659_multinode-856659-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 ssh -n multinode-856659 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 ssh -n multinode-856659-m03 "sudo cat /home/docker/cp-test_multinode-856659_multinode-856659-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 cp testdata/cp-test.txt multinode-856659-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 ssh -n multinode-856659-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 cp multinode-856659-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2276826938/001/cp-test_multinode-856659-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 ssh -n multinode-856659-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 cp multinode-856659-m02:/home/docker/cp-test.txt multinode-856659:/home/docker/cp-test_multinode-856659-m02_multinode-856659.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 ssh -n multinode-856659-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 ssh -n multinode-856659 "sudo cat /home/docker/cp-test_multinode-856659-m02_multinode-856659.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 cp multinode-856659-m02:/home/docker/cp-test.txt multinode-856659-m03:/home/docker/cp-test_multinode-856659-m02_multinode-856659-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 ssh -n multinode-856659-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 ssh -n multinode-856659-m03 "sudo cat /home/docker/cp-test_multinode-856659-m02_multinode-856659-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 cp testdata/cp-test.txt multinode-856659-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 ssh -n multinode-856659-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 cp multinode-856659-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2276826938/001/cp-test_multinode-856659-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 ssh -n multinode-856659-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 cp multinode-856659-m03:/home/docker/cp-test.txt multinode-856659:/home/docker/cp-test_multinode-856659-m03_multinode-856659.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 ssh -n multinode-856659-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 ssh -n multinode-856659 "sudo cat /home/docker/cp-test_multinode-856659-m03_multinode-856659.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 cp multinode-856659-m03:/home/docker/cp-test.txt multinode-856659-m02:/home/docker/cp-test_multinode-856659-m03_multinode-856659-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 ssh -n multinode-856659-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 ssh -n multinode-856659-m02 "sudo cat /home/docker/cp-test_multinode-856659-m03_multinode-856659-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.90s)
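
Every cp in the block above is immediately paired with an `ssh -n <node> "sudo cat …"` readback, so each copy is verified on the receiving node rather than trusted. A minimal standalone sketch of that round-trip in Go (the real helpers live in helpers_test.go; the binary path, profile, node, and file names below are taken from the log, while the function itself is illustrative, not the project's code):

    package main

    import (
        "bytes"
        "fmt"
        "log"
        "os/exec"
    )

    // copyAndVerify mirrors the cp-then-cat pattern above: copy a local
    // file onto a node, then read it back over SSH to prove it landed.
    func copyAndVerify(profile, node, src, dst string) error {
        // minikube cp <src> <node>:<dst>
        cp := exec.Command("out/minikube-linux-arm64", "-p", profile,
            "cp", src, node+":"+dst)
        if out, err := cp.CombinedOutput(); err != nil {
            return fmt.Errorf("cp failed: %v: %s", err, out)
        }
        // minikube ssh -n <node> "sudo cat <dst>"
        cat := exec.Command("out/minikube-linux-arm64", "-p", profile,
            "ssh", "-n", node, "sudo cat "+dst)
        var buf bytes.Buffer
        cat.Stdout = &buf
        if err := cat.Run(); err != nil {
            return fmt.Errorf("readback failed: %v", err)
        }
        fmt.Printf("node %s has %d bytes at %s\n", node, buf.Len(), dst)
        return nil
    }

    func main() {
        if err := copyAndVerify("multinode-856659", "multinode-856659-m02",
            "testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
            log.Fatal(err)
        }
    }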

TestMultiNode/serial/StopNode (2.23s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-856659 node stop m03: (1.219790195s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-856659 status: exit status 7 (503.118819ms)

-- stdout --
	multinode-856659
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-856659-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-856659-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-856659 status --alsologtostderr: exit status 7 (508.973647ms)

-- stdout --
	multinode-856659
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-856659-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-856659-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:54:22.462360 3032008 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:54:22.462687 3032008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:54:22.462702 3032008 out.go:304] Setting ErrFile to fd 2...
	I0729 10:54:22.462708 3032008 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:54:22.462951 3032008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-2904404/.minikube/bin
	I0729 10:54:22.463134 3032008 out.go:298] Setting JSON to false
	I0729 10:54:22.463167 3032008 mustload.go:65] Loading cluster: multinode-856659
	I0729 10:54:22.463557 3032008 config.go:182] Loaded profile config "multinode-856659": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0729 10:54:22.463575 3032008 status.go:255] checking status of multinode-856659 ...
	I0729 10:54:22.464115 3032008 cli_runner.go:164] Run: docker container inspect multinode-856659 --format={{.State.Status}}
	I0729 10:54:22.464486 3032008 notify.go:220] Checking for updates...
	I0729 10:54:22.482280 3032008 status.go:330] multinode-856659 host status = "Running" (err=<nil>)
	I0729 10:54:22.482321 3032008 host.go:66] Checking if "multinode-856659" exists ...
	I0729 10:54:22.482607 3032008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-856659
	I0729 10:54:22.500415 3032008 host.go:66] Checking if "multinode-856659" exists ...
	I0729 10:54:22.500832 3032008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:54:22.500913 3032008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-856659
	I0729 10:54:22.537075 3032008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36609 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/multinode-856659/id_rsa Username:docker}
	I0729 10:54:22.628964 3032008 ssh_runner.go:195] Run: systemctl --version
	I0729 10:54:22.633159 3032008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:54:22.644768 3032008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 10:54:22.694813 3032008 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-07-29 10:54:22.685273184 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0729 10:54:22.695591 3032008 kubeconfig.go:125] found "multinode-856659" server: "https://192.168.58.2:8443"
	I0729 10:54:22.695711 3032008 api_server.go:166] Checking apiserver status ...
	I0729 10:54:22.695777 3032008 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0729 10:54:22.707101 3032008 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1481/cgroup
	I0729 10:54:22.717403 3032008 api_server.go:182] apiserver freezer: "11:freezer:/docker/b869d8236f21f845aeb1243af70e840b84dc7e795d2766061e346903c693578e/kubepods/burstable/pod6fef4b123855cce181cbbd3845584474/c801aa89395a35eebd04b3514299ded6390385a29331c2688e65b1bcb516d113"
	I0729 10:54:22.717490 3032008 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b869d8236f21f845aeb1243af70e840b84dc7e795d2766061e346903c693578e/kubepods/burstable/pod6fef4b123855cce181cbbd3845584474/c801aa89395a35eebd04b3514299ded6390385a29331c2688e65b1bcb516d113/freezer.state
	I0729 10:54:22.729311 3032008 api_server.go:204] freezer state: "THAWED"
	I0729 10:54:22.729347 3032008 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0729 10:54:22.737315 3032008 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0729 10:54:22.737349 3032008 status.go:422] multinode-856659 apiserver status = Running (err=<nil>)
	I0729 10:54:22.737379 3032008 status.go:257] multinode-856659 status: &{Name:multinode-856659 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:54:22.737401 3032008 status.go:255] checking status of multinode-856659-m02 ...
	I0729 10:54:22.737750 3032008 cli_runner.go:164] Run: docker container inspect multinode-856659-m02 --format={{.State.Status}}
	I0729 10:54:22.754087 3032008 status.go:330] multinode-856659-m02 host status = "Running" (err=<nil>)
	I0729 10:54:22.754115 3032008 host.go:66] Checking if "multinode-856659-m02" exists ...
	I0729 10:54:22.754468 3032008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-856659-m02
	I0729 10:54:22.771187 3032008 host.go:66] Checking if "multinode-856659-m02" exists ...
	I0729 10:54:22.771493 3032008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0729 10:54:22.771532 3032008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-856659-m02
	I0729 10:54:22.798507 3032008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36614 SSHKeyPath:/home/jenkins/minikube-integration/19337-2904404/.minikube/machines/multinode-856659-m02/id_rsa Username:docker}
	I0729 10:54:22.888962 3032008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0729 10:54:22.900938 3032008 status.go:257] multinode-856659-m02 status: &{Name:multinode-856659-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:54:22.900974 3032008 status.go:255] checking status of multinode-856659-m03 ...
	I0729 10:54:22.901298 3032008 cli_runner.go:164] Run: docker container inspect multinode-856659-m03 --format={{.State.Status}}
	I0729 10:54:22.921031 3032008 status.go:330] multinode-856659-m03 host status = "Stopped" (err=<nil>)
	I0729 10:54:22.921055 3032008 status.go:343] host is not running, skipping remaining checks
	I0729 10:54:22.921063 3032008 status.go:257] multinode-856659-m03 status: &{Name:multinode-856659-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)
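
Worth noting in the output above: once m03 is stopped, `status` exits 7 instead of 0, so a caller can detect a degraded cluster from the exit code alone, without parsing the table. A hedged sketch of branching on that code (exit 7 is taken from this log; the wrapper itself is illustrative, not minikube code):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64", "-p", "multinode-856659", "status")
        out, err := cmd.CombinedOutput()
        var ee *exec.ExitError
        switch {
        case err == nil:
            fmt.Println("all nodes running")
        case errors.As(err, &ee) && ee.ExitCode() == 7:
            // Seen above: exit 7 means at least one host/kubelet is Stopped.
            fmt.Printf("degraded cluster:\n%s", out)
        default:
            fmt.Println("status failed outright:", err)
        }
    }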

TestMultiNode/serial/StartAfterStop (9.33s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-856659 node start m03 -v=7 --alsologtostderr: (8.538791406s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.33s)

TestMultiNode/serial/RestartKeepsNodes (86.73s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-856659
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-856659
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-856659: (25.113848622s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-856659 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-856659 --wait=true -v=8 --alsologtostderr: (1m1.483289563s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-856659
--- PASS: TestMultiNode/serial/RestartKeepsNodes (86.73s)

TestMultiNode/serial/DeleteNode (5.87s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-856659 node delete m03: (5.149523363s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.87s)
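
The final assertion above renders each node's Ready condition through a kubectl go-template. The same template logic can be exercised in plain Go, with one caveat: kubectl evaluates lowercase JSON keys (`.items`, `.status.conditions`) over raw maps, while a struct-based mock must use exported field names, so this is an approximation of the template, not an exact reproduction:

    package main

    import (
        "os"
        "text/template"
    )

    // Mock of the fields the kubectl go-template above touches.
    type cond struct{ Type, Status string }
    type node struct {
        Status struct{ Conditions []cond }
    }

    const tmpl = `{{range .}}{{range .Status.Conditions}}{{if eq .Type "Ready"}}{{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`

    func main() {
        var a, b node
        a.Status.Conditions = []cond{{Type: "Ready", Status: "True"}}
        b.Status.Conditions = []cond{{Type: "Ready", Status: "True"}}
        t := template.Must(template.New("ready").Parse(tmpl))
        _ = t.Execute(os.Stdout, []node{a, b}) // prints one "True" per Ready node
    }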

TestMultiNode/serial/StopMultiNode (23.98s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 stop
E0729 10:56:07.262925 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-856659 stop: (23.800420826s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-856659 status: exit status 7 (89.227868ms)

-- stdout --
	multinode-856659
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-856659-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-856659 status --alsologtostderr: exit status 7 (91.740638ms)

-- stdout --
	multinode-856659
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-856659-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0729 10:56:28.786264 3039991 out.go:291] Setting OutFile to fd 1 ...
	I0729 10:56:28.786662 3039991 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:56:28.786682 3039991 out.go:304] Setting ErrFile to fd 2...
	I0729 10:56:28.786690 3039991 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 10:56:28.786955 3039991 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-2904404/.minikube/bin
	I0729 10:56:28.787143 3039991 out.go:298] Setting JSON to false
	I0729 10:56:28.787184 3039991 mustload.go:65] Loading cluster: multinode-856659
	I0729 10:56:28.787272 3039991 notify.go:220] Checking for updates...
	I0729 10:56:28.788217 3039991 config.go:182] Loaded profile config "multinode-856659": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0729 10:56:28.788242 3039991 status.go:255] checking status of multinode-856659 ...
	I0729 10:56:28.788765 3039991 cli_runner.go:164] Run: docker container inspect multinode-856659 --format={{.State.Status}}
	I0729 10:56:28.805990 3039991 status.go:330] multinode-856659 host status = "Stopped" (err=<nil>)
	I0729 10:56:28.806013 3039991 status.go:343] host is not running, skipping remaining checks
	I0729 10:56:28.806021 3039991 status.go:257] multinode-856659 status: &{Name:multinode-856659 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0729 10:56:28.806064 3039991 status.go:255] checking status of multinode-856659-m02 ...
	I0729 10:56:28.806364 3039991 cli_runner.go:164] Run: docker container inspect multinode-856659-m02 --format={{.State.Status}}
	I0729 10:56:28.835550 3039991 status.go:330] multinode-856659-m02 host status = "Stopped" (err=<nil>)
	I0729 10:56:28.835575 3039991 status.go:343] host is not running, skipping remaining checks
	I0729 10:56:28.835583 3039991 status.go:257] multinode-856659-m02 status: &{Name:multinode-856659-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.98s)

TestMultiNode/serial/RestartMultiNode (51.04s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-856659 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0729 10:56:30.707979 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-856659 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (50.371927867s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-856659 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.04s)

TestMultiNode/serial/ValidateNameConflict (32.8s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-856659
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-856659-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-856659-m02 --driver=docker  --container-runtime=containerd: exit status 14 (84.641802ms)

-- stdout --
	* [multinode-856659-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19337-2904404/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-2904404/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-856659-m02' is duplicated with machine name 'multinode-856659-m02' in profile 'multinode-856659'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-856659-m03 --driver=docker  --container-runtime=containerd
E0729 10:57:30.309702 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-856659-m03 --driver=docker  --container-runtime=containerd: (30.361606821s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-856659
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-856659: exit status 80 (320.492842ms)

-- stdout --
	* Adding node m03 to cluster multinode-856659 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-856659-m03 already exists in multinode-856659-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-856659-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-856659-m03: (1.981875773s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (32.80s)

TestPreload (108.05s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-072982 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-072982 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m11.901978875s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-072982 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-072982 image pull gcr.io/k8s-minikube/busybox: (1.187601163s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-072982
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-072982: (12.059440586s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-072982 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-072982 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (20.034755868s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-072982 image list
helpers_test.go:175: Cleaning up "test-preload-072982" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-072982
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-072982: (2.474493285s)
--- PASS: TestPreload (108.05s)
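
The sequence above is the whole point of the test: create a v1.24.4 cluster without preloads, pull busybox, stop, restart with preloads enabled, and confirm the previously pulled image is still listed. A small sketch of that final check, grepping `image list` output (binary path, profile, and image name come from the log; the helper itself is illustrative):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
    )

    // imagePresent scans `minikube image list` output for a substring,
    // mirroring the verification step at the end of TestPreload.
    func imagePresent(profile, image string) (bool, error) {
        out, err := exec.Command("out/minikube-linux-arm64",
            "-p", profile, "image", "list").Output()
        if err != nil {
            return false, err
        }
        return strings.Contains(string(out), image), nil
    }

    func main() {
        ok, err := imagePresent("test-preload-072982", "gcr.io/k8s-minikube/busybox")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("busybox survived the restart:", ok)
    }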

TestScheduledStopUnix (110.71s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-152479 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-152479 --memory=2048 --driver=docker  --container-runtime=containerd: (34.332035193s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-152479 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-152479 -n scheduled-stop-152479
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-152479 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-152479 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-152479 -n scheduled-stop-152479
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-152479
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-152479 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0729 11:01:07.263143 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-152479
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-152479: exit status 7 (67.891616ms)

-- stdout --
	scheduled-stop-152479
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-152479 -n scheduled-stop-152479
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-152479 -n scheduled-stop-152479: exit status 7 (70.032218ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-152479" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-152479
E0729 11:01:30.708974 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-152479: (4.787242022s)
--- PASS: TestScheduledStopUnix (110.71s)
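
The test drives `minikube stop --schedule`, cancels, reschedules, and then polls `status --format={{.Host}}` until the host reports Stopped (surfacing as exit 7, noted "may be ok" above). A sketch of that poll loop, assuming the 15s schedule and the Stopped sentinel from the log; the timeout and interval are arbitrary choices, not the test's values:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        p := "scheduled-stop-152479"
        // Schedule a stop 15 seconds out, as the test does.
        if err := exec.Command("out/minikube-linux-arm64",
            "stop", "-p", p, "--schedule", "15s").Run(); err != nil {
            log.Fatal(err)
        }
        // Poll the host state; `status` exits non-zero once the host is
        // Stopped, so ignore the error and inspect the text instead.
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            out, _ := exec.Command("out/minikube-linux-arm64",
                "status", "--format={{.Host}}", "-p", p).Output()
            if strings.TrimSpace(string(out)) == "Stopped" {
                fmt.Println("scheduled stop completed")
                return
            }
            time.Sleep(5 * time.Second)
        }
        log.Fatal("host never reached Stopped")
    }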

TestInsufficientStorage (11.02s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-815639 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-815639 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.573952418s)

-- stdout --
	{"specversion":"1.0","id":"71e7d393-6ee6-426e-9d8a-7f9f75dad6fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-815639] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"dae234ca-fadd-4ee0-8b27-b780cbc4a5c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19337"}}
	{"specversion":"1.0","id":"a6aced2d-973d-48bc-811d-2e25404326fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c4e41c3e-bbe8-4255-bd49-050b1f3e67f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19337-2904404/kubeconfig"}}
	{"specversion":"1.0","id":"33fa20c2-d006-4fc5-b748-16ee0f3224a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-2904404/.minikube"}}
	{"specversion":"1.0","id":"0ca9ad02-6bc4-4508-897e-45bf52badf6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"9e613c41-e324-4c67-9b4b-40b19317c197","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8f2ea35d-0b61-49a6-aa93-289e551a0924","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f91815d4-0ee5-416e-8207-bbfa2dbb0291","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0efcfb47-e5ab-4827-ad82-61bf0543c978","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ece2d0c9-ec62-4fcb-a995-a8dc83f5c58b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"a41e0db4-3c25-432d-a370-a64e87eab303","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-815639\" primary control-plane node in \"insufficient-storage-815639\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4e84d8cc-425d-4c94-b919-5dcb5b7adb7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1721902582-19326 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"bc19c99c-3a19-4dd6-9975-86d5b30f879e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"fa9fd45c-afd1-4fdc-abc3-da9d5285692d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-815639 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-815639 --output=json --layout=cluster: exit status 7 (288.244354ms)

-- stdout --
	{"Name":"insufficient-storage-815639","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-815639","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0729 11:01:44.288750 3058577 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-815639" does not appear in /home/jenkins/minikube-integration/19337-2904404/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-815639 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-815639 --output=json --layout=cluster: exit status 7 (282.879862ms)

-- stdout --
	{"Name":"insufficient-storage-815639","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-815639","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0729 11:01:44.572993 3058637 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-815639" does not appear in /home/jenkins/minikube-integration/19337-2904404/kubeconfig
	E0729 11:01:44.583175 3058637 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/insufficient-storage-815639/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-815639" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-815639
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-815639: (1.877846118s)
--- PASS: TestInsufficientStorage (11.02s)
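
With `--output=json`, start emits one CloudEvents-style JSON object per line (`io.k8s.sigs.minikube.step`, `.info`, `.error`), and the test scans for the `RSRC_DOCKER_STORAGE` error event. A sketch of a decoder for just the fields visible above; the event shape is inferred from this log rather than from a published schema:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os"
    )

    // event models only the fields seen in the log lines above; all
    // values under "data" are strings there, including "exitcode".
    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        // Feed it `minikube start --output=json ...` on stdin.
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
        for sc.Scan() {
            var e event
            if json.Unmarshal(sc.Bytes(), &e) != nil {
                continue // not a JSON event line
            }
            if e.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("error %s (exit %s): %s\n",
                    e.Data["name"], e.Data["exitcode"], e.Data["message"])
            }
        }
    }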

TestRunningBinaryUpgrade (94.79s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3757161544 start -p running-upgrade-537015 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3757161544 start -p running-upgrade-537015 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (48.579562551s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-537015 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-537015 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (41.951915769s)
helpers_test.go:175: Cleaning up "running-upgrade-537015" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-537015
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-537015: (3.161277476s)
--- PASS: TestRunningBinaryUpgrade (94.79s)

TestKubernetesUpgrade (362.34s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-759427 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-759427 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m7.128694764s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-759427
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-759427: (1.215610831s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-759427 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-759427 status --format={{.Host}}: exit status 7 (69.253445ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-759427 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-759427 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m43.509663026s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-759427 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-759427 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-759427 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (77.898027ms)

-- stdout --
	* [kubernetes-upgrade-759427] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19337-2904404/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-2904404/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-759427
	    minikube start -p kubernetes-upgrade-759427 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7594272 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-759427 --kubernetes-version=v1.31.0-beta.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-759427 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-759427 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.8764837s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-759427" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-759427
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-759427: (2.335514463s)
--- PASS: TestKubernetesUpgrade (362.34s)
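
Two details above are contractual: downgrades are refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED) plus recovery suggestions, and the upgrade itself is confirmed with `kubectl version --output=json`. A sketch of pulling the server version out of that JSON; `serverVersion.gitVersion` is standard kubectl output, but treat the exact shape as an assumption for any given kubectl release:

    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os/exec"
    )

    // Just the field the upgrade test cares about.
    type versionOut struct {
        ServerVersion struct {
            GitVersion string `json:"gitVersion"`
        } `json:"serverVersion"`
    }

    func main() {
        out, err := exec.Command("kubectl", "--context",
            "kubernetes-upgrade-759427", "version", "--output=json").Output()
        if err != nil {
            log.Fatal(err)
        }
        var v versionOut
        if err := json.Unmarshal(out, &v); err != nil {
            log.Fatal(err)
        }
        // Expect v1.31.0-beta.0 after the upgrade in the log above.
        fmt.Println("apiserver:", v.ServerVersion.GitVersion)
    }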

TestMissingContainerUpgrade (154.92s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1460381899 start -p missing-upgrade-399084 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1460381899 start -p missing-upgrade-399084 --memory=2200 --driver=docker  --container-runtime=containerd: (1m19.640903132s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-399084
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-399084: (10.304224561s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-399084
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-399084 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-399084 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m1.443385195s)
helpers_test.go:175: Cleaning up "missing-upgrade-399084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-399084
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-399084: (2.29445174s)
--- PASS: TestMissingContainerUpgrade (154.92s)

TestPause/serial/Start (72.68s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-980896 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-980896 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m12.68487447s)
--- PASS: TestPause/serial/Start (72.68s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-344981 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-344981 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (103.469938ms)

-- stdout --
	* [NoKubernetes-344981] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19337-2904404/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-2904404/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (41.22s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-344981 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-344981 --driver=docker  --container-runtime=containerd: (40.663481412s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-344981 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.22s)

TestNoKubernetes/serial/StartWithStopK8s (7.15s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-344981 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-344981 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.864465015s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-344981 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-344981 status -o json: exit status 2 (322.668399ms)

-- stdout --
	{"Name":"NoKubernetes-344981","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-344981
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-344981: (1.966035993s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.15s)
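
The single-line JSON from `status -o json` above decodes into a small flat struct; the exit status 2 is expected here, since kubelet and apiserver are deliberately stopped while the host keeps running. A runnable sketch using the exact line from the log:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Mirrors the JSON printed by `minikube status -o json` above.
    type status struct {
        Name       string
        Host       string
        Kubelet    string
        APIServer  string
        Kubeconfig string
        Worker     bool
    }

    func main() {
        line := `{"Name":"NoKubernetes-344981","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
        var s status
        if err := json.Unmarshal([]byte(line), &s); err != nil {
            panic(err)
        }
        fmt.Printf("%s: host=%s kubelet=%s\n", s.Name, s.Host, s.Kubelet)
    }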

TestNoKubernetes/serial/Start (8.87s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-344981 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-344981 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.868027757s)
--- PASS: TestNoKubernetes/serial/Start (8.87s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-344981 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-344981 "sudo systemctl is-active --quiet service kubelet": exit status 1 (275.884623ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

TestNoKubernetes/serial/ProfileList (0.99s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.99s)

TestNoKubernetes/serial/Stop (1.33s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-344981
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-344981: (1.331951729s)
--- PASS: TestNoKubernetes/serial/Stop (1.33s)

TestNoKubernetes/serial/StartNoArgs (6.55s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-344981 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-344981 --driver=docker  --container-runtime=containerd: (6.546037653s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.55s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-344981 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-344981 "sudo systemctl is-active --quiet service kubelet": exit status 1 (262.602049ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

TestPause/serial/SecondStartNoReconfiguration (7.55s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-980896 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-980896 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.523040273s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.55s)

TestPause/serial/Pause (1.32s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-980896 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-980896 --alsologtostderr -v=5: (1.315657819s)
--- PASS: TestPause/serial/Pause (1.32s)

TestPause/serial/VerifyStatus (0.42s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-980896 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-980896 --output=json --layout=cluster: exit status 2 (418.661615ms)

-- stdout --
	{"Name":"pause-980896","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-980896","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
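
The cluster-layout JSON above encodes component state as HTTP-like codes: 200 (OK/Running), 418 (Paused), 405 (Stopped). `status` exits 2 here because the cluster is paused, which is exactly the state this test just created, so the non-zero exit passes. A quick way to pull the per-component states out of that JSON (a sketch; assumes jq is installed on the host):

    out/minikube-linux-arm64 status -p pause-980896 --output=json --layout=cluster \
      | jq '[.Nodes[].Components | to_entries[] | {name: .key, state: .value.StatusName}]'
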
--- PASS: TestPause/serial/VerifyStatus (0.42s)

TestPause/serial/Unpause (0.91s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-980896 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.91s)

TestPause/serial/PauseAgain (1.47s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-980896 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-980896 --alsologtostderr -v=5: (1.468589898s)
--- PASS: TestPause/serial/PauseAgain (1.47s)

TestPause/serial/DeletePaused (3.22s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-980896 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-980896 --alsologtostderr -v=5: (3.22186692s)
--- PASS: TestPause/serial/DeletePaused (3.22s)

TestPause/serial/VerifyDeletedResources (0.16s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-980896
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-980896: exit status 1 (24.369234ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-980896: no such volume

** /stderr **
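
An exit status of 1 with "no such volume" is the assertion here: after `minikube delete`, the profile's Docker volume and network must both be gone. A hand-rolled version of the same verification (sketch; profile name from this log):

    docker volume inspect pause-980896 >/dev/null 2>&1 && echo "volume still present" || echo "volume removed"
    docker network ls --format '{{.Name}}' | grep -x pause-980896 || echo "network removed"
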
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.16s)

TestStoppedBinaryUpgrade/Setup (1.06s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.06s)

TestStoppedBinaryUpgrade/Upgrade (106.12s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4234300543 start -p stopped-upgrade-076340 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0729 11:06:07.267081 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4234300543 start -p stopped-upgrade-076340 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (45.737963762s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4234300543 -p stopped-upgrade-076340 stop
E0729 11:06:30.708444 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4234300543 -p stopped-upgrade-076340 stop: (19.937713404s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-076340 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-076340 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (40.446685173s)
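
The Upgrade flow uses two binaries: a previously released v1.26.0 build (downloaded to /tmp) creates and then stops a cluster, and the binary under test must restart it in place, showing that config and state written by the old version are still readable. The three legs, condensed from the commands above (sketch):

    /tmp/minikube-v1.26.0.4234300543 start -p stopped-upgrade-076340 --memory=2200 --vm-driver=docker --container-runtime=containerd
    /tmp/minikube-v1.26.0.4234300543 -p stopped-upgrade-076340 stop
    out/minikube-linux-arm64 start -p stopped-upgrade-076340 --memory=2200 --driver=docker --container-runtime=containerd
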
--- PASS: TestStoppedBinaryUpgrade/Upgrade (106.12s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-076340
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-076340: (1.021239041s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.02s)

TestNetworkPlugins/group/false (4.9s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-245719 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-245719 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (233.131562ms)

-- stdout --
	* [false-245719] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19337
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19337-2904404/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-2904404/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration

-- /stdout --
** stderr ** 
	I0729 11:09:21.148933 3098551 out.go:291] Setting OutFile to fd 1 ...
	I0729 11:09:21.149217 3098551 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:09:21.149248 3098551 out.go:304] Setting ErrFile to fd 2...
	I0729 11:09:21.149268 3098551 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0729 11:09:21.149614 3098551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19337-2904404/.minikube/bin
	I0729 11:09:21.150146 3098551 out.go:298] Setting JSON to false
	I0729 11:09:21.151390 3098551 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":67912,"bootTime":1722183450,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1065-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0729 11:09:21.151508 3098551 start.go:139] virtualization:  
	I0729 11:09:21.156110 3098551 out.go:177] * [false-245719] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0729 11:09:21.158725 3098551 notify.go:220] Checking for updates...
	I0729 11:09:21.158695 3098551 out.go:177]   - MINIKUBE_LOCATION=19337
	I0729 11:09:21.161929 3098551 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0729 11:09:21.164690 3098551 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19337-2904404/kubeconfig
	I0729 11:09:21.166858 3098551 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19337-2904404/.minikube
	I0729 11:09:21.169010 3098551 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0729 11:09:21.171580 3098551 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0729 11:09:21.174719 3098551 config.go:182] Loaded profile config "force-systemd-flag-442512": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.30.3
	I0729 11:09:21.174888 3098551 driver.go:392] Setting default libvirt URI to qemu:///system
	I0729 11:09:21.201419 3098551 docker.go:123] docker version: linux-27.1.1:Docker Engine - Community
	I0729 11:09:21.201546 3098551 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0729 11:09:21.297238 3098551 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-07-29 11:09:21.286858599 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1065-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214900736 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41 Expected:2bf793ef6dc9a18e00cb12efb64355c2c9d5eb41} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0729 11:09:21.297365 3098551 docker.go:307] overlay module found
	I0729 11:09:21.301033 3098551 out.go:177] * Using the docker driver based on user configuration
	I0729 11:09:21.303242 3098551 start.go:297] selected driver: docker
	I0729 11:09:21.303261 3098551 start.go:901] validating driver "docker" against <nil>
	I0729 11:09:21.303281 3098551 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0729 11:09:21.306169 3098551 out.go:177] 
	W0729 11:09:21.308210 3098551 out.go:239] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0729 11:09:21.310632 3098551 out.go:177] 

** /stderr **
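
Exit status 14 is minikube's MK_USAGE error class: the containerd runtime requires a CNI plugin, so `--cni=false` is rejected during flag validation before any cluster is created, which is also why every debug probe below reports a missing profile or context. The failure is reproducible in isolation (sketch; flags copied from the command above):

    out/minikube-linux-arm64 start -p false-245719 --cni=false --driver=docker --container-runtime=containerd
    echo $?   # expect 14: 'The "containerd" container runtime requires CNI'
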
net_test.go:88: 
----------------------- debugLogs start: false-245719 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-245719

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-245719

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-245719

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-245719

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-245719

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-245719

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-245719

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-245719

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-245719

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-245719

>>> host: /etc/nsswitch.conf:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> host: /etc/hosts:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> host: /etc/resolv.conf:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-245719

>>> host: crictl pods:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> host: crictl containers:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> k8s: describe netcat deployment:
error: context "false-245719" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-245719" does not exist

>>> k8s: netcat logs:
error: context "false-245719" does not exist

>>> k8s: describe coredns deployment:
error: context "false-245719" does not exist

>>> k8s: describe coredns pods:
error: context "false-245719" does not exist

>>> k8s: coredns logs:
error: context "false-245719" does not exist

>>> k8s: describe api server pod(s):
error: context "false-245719" does not exist

>>> k8s: api server logs:
error: context "false-245719" does not exist

>>> host: /etc/cni:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> host: ip a s:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> host: ip r s:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> host: iptables-save:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> host: iptables table nat:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> k8s: describe kube-proxy daemon set:
error: context "false-245719" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-245719" does not exist

>>> k8s: kube-proxy logs:
error: context "false-245719" does not exist

>>> host: kubelet daemon status:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> host: kubelet daemon config:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> k8s: kubelet logs:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-245719

>>> host: docker daemon status:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> host: docker daemon config:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> host: /etc/docker/daemon.json:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> host: docker system info:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> host: cri-docker daemon status:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> host: cri-docker daemon config:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> host: cri-dockerd version:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> host: containerd daemon status:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> host: containerd daemon config:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> host: /etc/containerd/config.toml:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> host: containerd config dump:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> host: crio daemon status:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> host: crio daemon config:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> host: /etc/crio:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

>>> host: crio config:
* Profile "false-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-245719"

----------------------- debugLogs end: false-245719 [took: 4.388843095s] --------------------------------
helpers_test.go:175: Cleaning up "false-245719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-245719
--- PASS: TestNetworkPlugins/group/false (4.90s)

TestStartStop/group/old-k8s-version/serial/FirstStart (164.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-398652 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0729 11:11:07.263328 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
E0729 11:11:30.708383 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-398652 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m44.18553487s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (164.19s)

TestStartStop/group/no-preload/serial/FirstStart (76.81s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-707151 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-707151 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0: (1m16.8106535s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.81s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-398652 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8bb77619-835e-47b8-9cf3-d009b6e989cc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8bb77619-835e-47b8-9cf3-d009b6e989cc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.00759094s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-398652 exec busybox -- /bin/sh -c "ulimit -n"
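
The `ulimit -n` exec is the point of DeployApp: beyond proving a pod can be scheduled and become Ready on the freshly started cluster, it reads the open-file-descriptor limit inside the container. Roughly the same steps by hand (sketch; the busybox manifest is the repo's test fixture):

    kubectl --context old-k8s-version-398652 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-398652 wait --for=condition=Ready pod/busybox --timeout=8m0s
    kubectl --context old-k8s-version-398652 exec busybox -- /bin/sh -c "ulimit -n"
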
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.90s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-398652 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-398652 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.092127127s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-398652 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.24s)

TestStartStop/group/old-k8s-version/serial/Stop (12.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-398652 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-398652 --alsologtostderr -v=3: (12.651733175s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.65s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-398652 -n old-k8s-version-398652
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-398652 -n old-k8s-version-398652: exit status 7 (87.007779ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-398652 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
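
Note the sequencing: `status` exits 7 to flag a stopped host ("may be ok"), yet `addons enable dashboard` still succeeds, since on this flow enabling an addon on a stopped profile only has to update the stored profile config; the addon is deployed on the next start. A compact sketch of that behaviour:

    out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-398652 || echo "host stopped (exit $?)"
    out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-398652   # records intent while stopped
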
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/DeployApp (8.42s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-707151 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b64ac7e1-8a57-48f0-a6b7-cd316f5607e7] Pending
helpers_test.go:344: "busybox" [b64ac7e1-8a57-48f0-a6b7-cd316f5607e7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b64ac7e1-8a57-48f0-a6b7-cd316f5607e7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004845511s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-707151 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.42s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-707151 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-707151 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.102562371s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-707151 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/no-preload/serial/Stop (12.26s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-707151 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-707151 --alsologtostderr -v=3: (12.261577649s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.26s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-707151 -n no-preload-707151
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-707151 -n no-preload-707151: exit status 7 (79.95179ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-707151 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (266.71s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-707151 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0
E0729 11:16:07.263367 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
E0729 11:16:30.708348 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-707151 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0: (4m26.353116067s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-707151 -n no-preload-707151
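
SecondStart re-runs `start` with flags identical to FirstStart after a full `stop`, asserting that a stopped cluster restarts cleanly with the same configuration; the follow-up `status` probe confirms the host is back. The same cycle by hand (sketch; flags from this log):

    out/minikube-linux-arm64 stop -p no-preload-707151
    out/minikube-linux-arm64 start -p no-preload-707151 --memory=2200 --wait=true --preload=false \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0
    out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-707151
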
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (266.71s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-5gtk6" [680d516b-b765-4b43-a12f-3ba3addf2098] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003797825s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-5gtk6" [680d516b-b765-4b43-a12f-3ba3addf2098] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003791896s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-707151 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-707151 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
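
VerifyKubernetesImages lists the images cached in the profile and flags anything outside the stock Kubernetes set; the kindnetd and busybox entries above are reported as informational extras, not failures. To eyeball the same list (a sketch; assumes jq is available and that the JSON output carries a repoTags field):

    out/minikube-linux-arm64 -p no-preload-707151 image list --format=json | jq -r '.[].repoTags[]'
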
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (3.32s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-707151 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-707151 -n no-preload-707151
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-707151 -n no-preload-707151: exit status 2 (363.770006ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-707151 -n no-preload-707151
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-707151 -n no-preload-707151: exit status 2 (341.601297ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-707151 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-707151 -n no-preload-707151
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-707151 -n no-preload-707151
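
The Pause test tolerates exit status 2 from both `status` probes above because that is the signal for a paused control plane (apiserver Paused, kubelet Stopped); after `unpause`, the final two probes must exit 0 again. The round trip, condensed (sketch):

    out/minikube-linux-arm64 pause -p no-preload-707151
    out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-707151   # "Paused", exit 2
    out/minikube-linux-arm64 unpause -p no-preload-707151
    out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-707151   # running again, exit 0
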
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.32s)

TestStartStop/group/embed-certs/serial/FirstStart (63.1s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-483052 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-483052 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3: (1m3.104837759s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (63.10s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-949wc" [22cf327c-7755-4ceb-8861-c1c5d8e28377] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004876946s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-949wc" [22cf327c-7755-4ceb-8861-c1c5d8e28377] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004647227s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-398652 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-398652 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/old-k8s-version/serial/Pause (4.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-398652 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-398652 --alsologtostderr -v=1: (1.088054717s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-398652 -n old-k8s-version-398652
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-398652 -n old-k8s-version-398652: exit status 2 (514.142133ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-398652 -n old-k8s-version-398652
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-398652 -n old-k8s-version-398652: exit status 2 (419.695541ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-398652 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-398652 --alsologtostderr -v=1: (1.269117275s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-398652 -n old-k8s-version-398652
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-398652 -n old-k8s-version-398652
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.25s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-187311 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-187311 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3: (1m12.011652543s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (72.01s)

TestStartStop/group/embed-certs/serial/DeployApp (9.46s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-483052 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6ade6e2b-ab13-4639-8d0e-8c21472d98fe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6ade6e2b-ab13-4639-8d0e-8c21472d98fe] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004042948s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-483052 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.46s)
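
The deploy check above reduces to two kubectl calls plus a readiness wait; a minimal sketch (testdata/busybox.yaml is the manifest from the minikube repo, and kubectl wait stands in for the test's own polling):

	# Create the busybox pod, wait for it to become Ready, then read the
	# file-descriptor limit inside the container.
	kubectl --context embed-certs-483052 create -f testdata/busybox.yaml
	kubectl --context embed-certs-483052 wait --for=condition=Ready pod/busybox --timeout=8m
	kubectl --context embed-certs-483052 exec busybox -- /bin/sh -c "ulimit -n"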

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.3s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-483052 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-483052 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.160819642s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-483052 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.30s)

TestStartStop/group/embed-certs/serial/Stop (12.35s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-483052 --alsologtostderr -v=3
E0729 11:21:07.262806 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-483052 --alsologtostderr -v=3: (12.348643444s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.35s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-483052 -n embed-certs-483052
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-483052 -n embed-certs-483052: exit status 7 (71.964778ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-483052 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)
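
The pattern above generalizes: on a stopped profile, status exits 7, but addon configuration is still accepted and takes effect on the next start. A minimal sketch (minikube on PATH assumed):

	# Confirm the host is stopped (prints "Stopped", exit status 7), then
	# enable the dashboard addon while the cluster is down.
	minikube status --format={{.Host}} -p embed-certs-483052 -n embed-certs-483052
	minikube addons enable dashboard -p embed-certs-483052 --images=MetricsScraper=registry.k8s.io/echoserver:1.4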

TestStartStop/group/embed-certs/serial/SecondStart (291.8s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-483052 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3
E0729 11:21:30.708666 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-483052 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3: (4m51.421871339s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-483052 -n embed-certs-483052
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (291.80s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-187311 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [10711c20-9a0b-4b92-8be3-4ca64254e6e2] Pending
helpers_test.go:344: "busybox" [10711c20-9a0b-4b92-8be3-4ca64254e6e2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [10711c20-9a0b-4b92-8be3-4ca64254e6e2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004462944s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-187311 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.48s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-187311 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-187311 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.065783912s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-187311 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-187311 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-187311 --alsologtostderr -v=3: (12.10438334s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.10s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-187311 -n default-k8s-diff-port-187311
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-187311 -n default-k8s-diff-port-187311: exit status 7 (70.46996ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-187311 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-187311 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3
E0729 11:23:37.123175 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652/client.crt: no such file or directory
E0729 11:23:37.128317 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652/client.crt: no such file or directory
E0729 11:23:37.138597 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652/client.crt: no such file or directory
E0729 11:23:37.158914 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652/client.crt: no such file or directory
E0729 11:23:37.199233 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652/client.crt: no such file or directory
E0729 11:23:37.279679 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652/client.crt: no such file or directory
E0729 11:23:37.440057 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652/client.crt: no such file or directory
E0729 11:23:37.760451 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652/client.crt: no such file or directory
E0729 11:23:38.401068 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652/client.crt: no such file or directory
E0729 11:23:39.681368 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652/client.crt: no such file or directory
E0729 11:23:42.242412 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652/client.crt: no such file or directory
E0729 11:23:47.362980 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652/client.crt: no such file or directory
E0729 11:23:57.603855 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652/client.crt: no such file or directory
E0729 11:24:18.084335 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652/client.crt: no such file or directory
E0729 11:24:46.116753 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/no-preload-707151/client.crt: no such file or directory
E0729 11:24:46.122933 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/no-preload-707151/client.crt: no such file or directory
E0729 11:24:46.133281 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/no-preload-707151/client.crt: no such file or directory
E0729 11:24:46.153794 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/no-preload-707151/client.crt: no such file or directory
E0729 11:24:46.194084 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/no-preload-707151/client.crt: no such file or directory
E0729 11:24:46.274396 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/no-preload-707151/client.crt: no such file or directory
E0729 11:24:46.434840 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/no-preload-707151/client.crt: no such file or directory
E0729 11:24:46.755148 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/no-preload-707151/client.crt: no such file or directory
E0729 11:24:47.396159 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/no-preload-707151/client.crt: no such file or directory
E0729 11:24:48.677246 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/no-preload-707151/client.crt: no such file or directory
E0729 11:24:51.238186 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/no-preload-707151/client.crt: no such file or directory
E0729 11:24:56.358890 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/no-preload-707151/client.crt: no such file or directory
E0729 11:24:59.044568 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652/client.crt: no such file or directory
E0729 11:25:06.600034 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/no-preload-707151/client.crt: no such file or directory
E0729 11:25:27.081021 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/no-preload-707151/client.crt: no such file or directory
E0729 11:26:07.262909 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
E0729 11:26:08.041259 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/no-preload-707151/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-187311 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.30.3: (4m27.466720883s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-187311 -n default-k8s-diff-port-187311
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.90s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-7lz4l" [9e64f906-a6fd-4569-9593-af498a7870d3] Running
E0729 11:26:13.758818 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003512834s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-7lz4l" [9e64f906-a6fd-4569-9593-af498a7870d3] Running
E0729 11:26:20.964903 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00436854s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-483052 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-483052 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)
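
For reference, the image audit above is a single command; a sketch (minikube on PATH assumed):

	# List images present in the profile's container runtime as JSON; the
	# test then flags any image outside the expected minikube set.
	minikube -p embed-certs-483052 image list --format=json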

TestStartStop/group/embed-certs/serial/Pause (3.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-483052 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-483052 -n embed-certs-483052
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-483052 -n embed-certs-483052: exit status 2 (300.691608ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-483052 -n embed-certs-483052
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-483052 -n embed-certs-483052: exit status 2 (321.400206ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-483052 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-483052 -n embed-certs-483052
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-483052 -n embed-certs-483052
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.12s)

TestStartStop/group/newest-cni/serial/FirstStart (39.39s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-103836 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0
E0729 11:26:30.708597 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-103836 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0: (39.389901103s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.39s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-prrtb" [f3667c8b-f9ca-420b-8bce-7e60cb91807b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004629463s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-prrtb" [f3667c8b-f9ca-420b-8bce-7e60cb91807b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00495705s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-187311 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-187311 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-585640e9
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-187311 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-187311 -n default-k8s-diff-port-187311
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-187311 -n default-k8s-diff-port-187311: exit status 2 (377.808325ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-187311 -n default-k8s-diff-port-187311
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-187311 -n default-k8s-diff-port-187311: exit status 2 (555.935248ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-187311 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-187311 -n default-k8s-diff-port-187311
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-187311 -n default-k8s-diff-port-187311
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.92s)

TestNetworkPlugins/group/auto/Start (75.82s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-245719 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-245719 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m15.819478336s)
--- PASS: TestNetworkPlugins/group/auto/Start (75.82s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.91s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-103836 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-103836 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.912133843s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.91s)

TestStartStop/group/newest-cni/serial/Stop (3.64s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-103836 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-103836 --alsologtostderr -v=3: (3.642335767s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.64s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-103836 -n newest-cni-103836
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-103836 -n newest-cni-103836: exit status 7 (114.154959ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-103836 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/newest-cni/serial/SecondStart (22.52s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-103836 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0
E0729 11:27:29.961418 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/no-preload-707151/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-103836 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0-beta.0: (22.072636601s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-103836 -n newest-cni-103836
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (22.52s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-103836 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240715-f6ad1f6e
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240719-e7903573
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/newest-cni/serial/Pause (4.38s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-103836 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-103836 --alsologtostderr -v=1: (1.164586974s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-103836 -n newest-cni-103836
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-103836 -n newest-cni-103836: exit status 2 (481.682615ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-103836 -n newest-cni-103836
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-103836 -n newest-cni-103836: exit status 2 (490.205984ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-103836 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p newest-cni-103836 --alsologtostderr -v=1: (1.100744387s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-103836 -n newest-cni-103836
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-103836 -n newest-cni-103836
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.38s)

TestNetworkPlugins/group/kindnet/Start (69.67s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-245719 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-245719 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m9.666132363s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (69.67s)

TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-245719 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.38s)

TestNetworkPlugins/group/auto/NetCatPod (10.34s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-245719 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-56zpl" [9e7df1fe-5fc7-46d1-89b1-d6de2848cee3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-56zpl" [9e7df1fe-5fc7-46d1-89b1-d6de2848cee3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003526887s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.34s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-245719 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-245719 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-245719 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
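
The DNS, Localhost, and HairPin checks above are plain execs against the netcat deployment; a minimal sketch of the three probes:

	# DNS: resolve the in-cluster API service name from a pod.
	kubectl --context auto-245719 exec deployment/netcat -- nslookup kubernetes.default
	# Localhost: the pod can reach its own listener via the loopback address.
	kubectl --context auto-245719 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	# HairPin: the pod can reach itself back through its own service name.
	kubectl --context auto-245719 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"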

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-j7pqq" [07fda447-3b57-4b4a-bed8-e61e2b99a064] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003735481s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/Start (76.12s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-245719 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-245719 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m16.120565927s)
--- PASS: TestNetworkPlugins/group/calico/Start (76.12s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-245719 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.48s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-245719 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-wnn5q" [0d408bd8-3543-4ee6-b09a-0aade3815a81] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0729 11:29:04.805893 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-wnn5q" [0d408bd8-3543-4ee6-b09a-0aade3815a81] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004040601s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.48s)

TestNetworkPlugins/group/kindnet/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-245719 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.22s)

TestNetworkPlugins/group/kindnet/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-245719 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.22s)

TestNetworkPlugins/group/kindnet/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-245719 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

TestNetworkPlugins/group/custom-flannel/Start (67.82s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-245719 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0729 11:29:46.117766 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/no-preload-707151/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-245719 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m7.819991295s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (67.82s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-7bq7m" [5af159ca-ea35-46ad-a8db-b93fd8e22002] Running
E0729 11:30:13.801640 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/no-preload-707151/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005219702s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-245719 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

TestNetworkPlugins/group/calico/NetCatPod (10.3s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-245719 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-mx7bp" [2ceb45a2-3ee8-4218-9113-7dd6e457f9df] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-mx7bp" [2ceb45a2-3ee8-4218-9113-7dd6e457f9df] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.006657535s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.30s)

TestNetworkPlugins/group/calico/DNS (0.35s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-245719 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.35s)

TestNetworkPlugins/group/calico/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-245719 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.24s)

TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-245719 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-245719 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-245719 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-lq65x" [b62c1969-fce7-49d1-8fcc-45ec3de448e2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-lq65x" [b62c1969-fce7-49d1-8fcc-45ec3de448e2] Running
E0729 11:30:50.311330 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/functional-788372/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004277046s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.37s)

TestNetworkPlugins/group/enable-default-cni/Start (59.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-245719 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-245719 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (59.239670195s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (59.24s)

TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-245719 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-245719 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-245719 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.19s)

TestNetworkPlugins/group/flannel/Start (66.37s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-245719 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0729 11:31:30.708844 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/addons-299185/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-245719 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m6.370623258s)
--- PASS: TestNetworkPlugins/group/flannel/Start (66.37s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-245719 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.43s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.44s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-245719 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-9m522" [fcd6b8e2-4595-48a1-9892-cc8a01fb046a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0729 11:31:56.584754 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/default-k8s-diff-port-187311/client.crt: no such file or directory
E0729 11:31:56.589996 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/default-k8s-diff-port-187311/client.crt: no such file or directory
E0729 11:31:56.600288 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/default-k8s-diff-port-187311/client.crt: no such file or directory
E0729 11:31:56.620478 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/default-k8s-diff-port-187311/client.crt: no such file or directory
E0729 11:31:56.660731 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/default-k8s-diff-port-187311/client.crt: no such file or directory
E0729 11:31:56.741027 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/default-k8s-diff-port-187311/client.crt: no such file or directory
E0729 11:31:56.901373 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/default-k8s-diff-port-187311/client.crt: no such file or directory
E0729 11:31:57.222500 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/default-k8s-diff-port-187311/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-9m522" [fcd6b8e2-4595-48a1-9892-cc8a01fb046a] Running
E0729 11:31:57.863091 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/default-k8s-diff-port-187311/client.crt: no such file or directory
E0729 11:31:59.143704 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/default-k8s-diff-port-187311/client.crt: no such file or directory
E0729 11:32:01.704788 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/default-k8s-diff-port-187311/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.003666032s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.44s)
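
Every NetCatPod case follows the two-step pattern shown above: force-replace the netcat deployment from testdata/netcat-deployment.yaml, then poll (up to 15m) for a pod labelled app=netcat to reach Running. The interleaved cert_rotation errors reference client certs of profiles already deleted by earlier tests and do not affect the verdict. A manual replay (a sketch, assuming the profile and the testdata checkout are still available):

	kubectl --context enable-default-cni-245719 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context enable-default-cni-245719 get pods -l app=netcat --watch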

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-245719 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-245719 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-245719 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/bridge/Start (50.03s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-245719 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-245719 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (50.024984093s)
--- PASS: TestNetworkPlugins/group/bridge/Start (50.03s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-vgl9v" [db1e3216-dc8e-43b9-8455-7901d3793e33] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00867657s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
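
ControllerPod only runs for CNIs that ship their own controller DaemonSet; here it waits for the kube-flannel-ds pod (label app=flannel) in the kube-flannel namespace to be Running. The equivalent manual check (a sketch against the same profile):

	kubectl --context flannel-245719 get pods -n kube-flannel -l app=flannel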

TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-245719 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/flannel/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-245719 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-24t2b" [6184ef62-1019-4c88-92e8-6c81d99c0ab2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0729 11:32:37.546590 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/default-k8s-diff-port-187311/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-24t2b" [6184ef62-1019-4c88-92e8-6c81d99c0ab2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.011736711s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.33s)

TestNetworkPlugins/group/flannel/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-245719 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

TestNetworkPlugins/group/flannel/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-245719 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.22s)

TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-245719 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-245719 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-245719 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-rsqgn" [a8b57472-77fb-43f1-afa3-90147034e451] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0729 11:33:18.507174 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/default-k8s-diff-port-187311/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-rsqgn" [a8b57472-77fb-43f1-afa3-90147034e451] Running
E0729 11:33:22.687107 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/auto-245719/client.crt: no such file or directory
E0729 11:33:22.692348 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/auto-245719/client.crt: no such file or directory
E0729 11:33:22.702593 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/auto-245719/client.crt: no such file or directory
E0729 11:33:22.722836 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/auto-245719/client.crt: no such file or directory
E0729 11:33:22.763086 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/auto-245719/client.crt: no such file or directory
E0729 11:33:22.843358 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/auto-245719/client.crt: no such file or directory
E0729 11:33:23.005035 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/auto-245719/client.crt: no such file or directory
E0729 11:33:23.325627 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/auto-245719/client.crt: no such file or directory
E0729 11:33:23.965854 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/auto-245719/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.005109184s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

TestNetworkPlugins/group/bridge/DNS (33.46s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-245719 exec deployment/netcat -- nslookup kubernetes.default
E0729 11:33:25.246696 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/auto-245719/client.crt: no such file or directory
E0729 11:33:27.807609 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/auto-245719/client.crt: no such file or directory
E0729 11:33:32.928340 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/auto-245719/client.crt: no such file or directory
E0729 11:33:37.122930 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/old-k8s-version-398652/client.crt: no such file or directory
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-245719 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.173720097s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-245719 exec deployment/netcat -- nslookup kubernetes.default
E0729 11:33:43.169449 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/auto-245719/client.crt: no such file or directory
E0729 11:33:52.973184 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/kindnet-245719/client.crt: no such file or directory
E0729 11:33:52.978501 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/kindnet-245719/client.crt: no such file or directory
E0729 11:33:52.988790 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/kindnet-245719/client.crt: no such file or directory
E0729 11:33:53.009130 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/kindnet-245719/client.crt: no such file or directory
E0729 11:33:53.049391 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/kindnet-245719/client.crt: no such file or directory
E0729 11:33:53.129748 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/kindnet-245719/client.crt: no such file or directory
E0729 11:33:53.290141 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/kindnet-245719/client.crt: no such file or directory
E0729 11:33:53.610680 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/kindnet-245719/client.crt: no such file or directory
E0729 11:33:54.251688 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/kindnet-245719/client.crt: no such file or directory
E0729 11:33:55.531931 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/kindnet-245719/client.crt: no such file or directory
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-245719 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.176473911s)
-- stdout --
	;; connection timed out; no servers could be reached
	
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
net_test.go:175: (dbg) Run:  kubectl --context bridge-245719 exec deployment/netcat -- nslookup kubernetes.default
E0729 11:33:58.092176 2909789 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/19337-2904404/.minikube/profiles/kindnet-245719/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/DNS (33.46s)
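
The two failed nslookup attempts above ("connection timed out; no servers could be reached") are consistent with CoreDNS still coming up behind the freshly created bridge CNI; the test re-runs the lookup until it succeeds, which is why the case still passes, just slowly (33.46s). The probe itself is a one-liner (a sketch, assuming the bridge-245719 profile and its netcat deployment are still present):

	kubectl --context bridge-245719 exec deployment/netcat -- nslookup kubernetes.default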

TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-245719 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-245719 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)
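
HairPin is the one case in each group that exercises hairpin traffic: the netcat pod dials its own Service name ("netcat") on port 8080, which only succeeds if the CNI lets a pod reach itself back through the Service VIP; Localhost is the control with the same nc flags against localhost. Replayed by hand (a sketch against the same profile):

	kubectl --context bridge-245719 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"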

Test skip (31/336)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnlyKic (0.54s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-578308 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-578308" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-578308
--- SKIP: TestDownloadOnlyKic (0.54s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1783: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.28s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-863975" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-863975
--- SKIP: TestStartStop/group/disable-driver-mounts (0.28s)

TestNetworkPlugins/group/kubenet (4.37s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-245719 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-245719

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-245719

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-245719

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-245719

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-245719

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-245719

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-245719

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-245719

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-245719

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-245719

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> host: /etc/hosts:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> host: /etc/resolv.conf:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-245719

>>> host: crictl pods:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> host: crictl containers:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> k8s: describe netcat deployment:
error: context "kubenet-245719" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-245719" does not exist

>>> k8s: netcat logs:
error: context "kubenet-245719" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-245719" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-245719" does not exist

>>> k8s: coredns logs:
error: context "kubenet-245719" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-245719" does not exist

>>> k8s: api server logs:
error: context "kubenet-245719" does not exist

>>> host: /etc/cni:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> host: ip a s:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> host: ip r s:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> host: iptables-save:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> host: iptables table nat:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-245719" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-245719" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-245719" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> host: kubelet daemon config:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> k8s: kubelet logs:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-245719

>>> host: docker daemon status:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> host: docker daemon config:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> host: docker system info:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> host: cri-docker daemon status:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> host: cri-docker daemon config:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> host: cri-dockerd version:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> host: containerd daemon status:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> host: containerd daemon config:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> host: containerd config dump:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> host: crio daemon status:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> host: crio daemon config:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> host: /etc/crio:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

>>> host: crio config:
* Profile "kubenet-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-245719"

----------------------- debugLogs end: kubenet-245719 [took: 4.170368264s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-245719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-245719
--- SKIP: TestNetworkPlugins/group/kubenet (4.37s)

TestNetworkPlugins/group/cilium (5.64s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-245719 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-245719

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-245719

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-245719

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-245719

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-245719

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-245719

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-245719

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-245719

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-245719
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-245719
>>> host: /etc/nsswitch.conf:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> host: /etc/hosts:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> host: /etc/resolv.conf:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-245719
>>> host: crictl pods:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> host: crictl containers:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> k8s: describe netcat deployment:
error: context "cilium-245719" does not exist
>>> k8s: describe netcat pod(s):
error: context "cilium-245719" does not exist
>>> k8s: netcat logs:
error: context "cilium-245719" does not exist
>>> k8s: describe coredns deployment:
error: context "cilium-245719" does not exist
>>> k8s: describe coredns pods:
error: context "cilium-245719" does not exist
>>> k8s: coredns logs:
error: context "cilium-245719" does not exist
>>> k8s: describe api server pod(s):
error: context "cilium-245719" does not exist
>>> k8s: api server logs:
error: context "cilium-245719" does not exist
>>> host: /etc/cni:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> host: ip a s:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> host: ip r s:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> host: iptables-save:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> host: iptables table nat:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-245719
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-245719
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-245719" does not exist
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-245719" does not exist
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-245719
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-245719
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-245719" does not exist
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-245719" does not exist
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-245719" does not exist
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-245719" does not exist
>>> k8s: kube-proxy logs:
error: context "cilium-245719" does not exist
>>> host: kubelet daemon status:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> host: kubelet daemon config:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> k8s: kubelet logs:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-245719
>>> host: docker daemon status:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> host: docker daemon config:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> host: /etc/docker/daemon.json:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> host: docker system info:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> host: cri-docker daemon status:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> host: cri-docker daemon config:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> host: cri-dockerd version:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> host: containerd daemon status:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> host: containerd daemon config:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> host: /etc/containerd/config.toml:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> host: containerd config dump:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> host: crio daemon status:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> host: crio daemon config:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> host: /etc/crio:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
>>> host: crio config:
* Profile "cilium-245719" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-245719"
----------------------- debugLogs end: cilium-245719 [took: 5.463570513s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-245719" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-245719
--- SKIP: TestNetworkPlugins/group/cilium (5.64s)