Test Report: Docker_Linux_containerd_arm64 19643

17d31f5d116bbb5d9ac8f4a1c2873ea47cdfa40f:2024-09-14:36211

Test failures (2/328)

|-------|---------------------------------------------------------|--------------|
| Order | Failed test                                             | Duration (s) |
|-------|---------------------------------------------------------|--------------|
|    29 | TestAddons/serial/Volcano                               |       199.97 |
|   302 | TestStartStop/group/old-k8s-version/serial/SecondStart  |       376.23 |
|-------|---------------------------------------------------------|--------------|
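Both failures can in principle be re-run individually with Go's subtest selection. This is a sketch only: the ./test/integration path assumes the minikube repository layout, and the driver and start-args flags this CI job passes are not shown here, so additional arguments may be required.

# hypothetical local re-run of the two failed tests; extra minikube flags may be needed
go test ./test/integration -run 'TestAddons/serial/Volcano' -timeout 90m
go test ./test/integration -run 'TestStartStop/group/old-k8s-version/serial/SecondStart' -timeout 90m
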
TestAddons/serial/Volcano (199.97s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 43.422305ms
addons_test.go:913: volcano-controller stabilized in 43.475786ms
addons_test.go:905: volcano-admission stabilized in 43.50781ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-4mvft" [6a1b8dd3-abcc-43eb-82ca-2584cf24a122] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003915741s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-fcqnn" [d0ae9733-17df-47d1-b004-587a8173ed02] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004077168s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-jkmfm" [0cb20d50-f60b-4390-8381-7ddf68ec086c] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003448303s
addons_test.go:932: (dbg) Run:  kubectl --context addons-478069 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-478069 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-478069 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [3bce6481-db8c-4783-a51e-e4547470416c] Pending
helpers_test.go:344: "test-job-nginx-0" [3bce6481-db8c-4783-a51e-e4547470416c] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-478069 -n addons-478069
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-14 17:43:34.782570467 +0000 UTC m=+431.355119627
addons_test.go:964: (dbg) Run:  kubectl --context addons-478069 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-478069 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
volcano.sh/job-namespace=my-volcano
volcano.sh/queue-name=test
volcano.sh/task-index=0
volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-2b315aca-09d1-44db-8312-8772c9499b67
volcano.sh/job-name: test-job
volcano.sh/job-version: 0
volcano.sh/queue-name: test
volcano.sh/task-index: 0
volcano.sh/task-spec: nginx
volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
nginx:
Image:      nginx:latest
Port:       <none>
Host Port:  <none>
Command:
sleep
10m
Limits:
cpu:  1
Requests:
cpu:  1
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mhxfw (ro)
Conditions:
Type           Status
PodScheduled   False 
Volumes:
kube-api-access-mhxfw:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age    From     Message
----     ------            ----   ----     -------
Warning  FailedScheduling  2m58s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-478069 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-478069 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
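The describe output above pins down the failing workload: queue test, a single nginx task, a hard 1-CPU request and limit, and command sleep 10m. An illustrative Volcano Job of that shape is sketched below; it is a reconstruction from the pod's labels, annotations, and container spec, not the actual contents of testdata/vcjob.yaml.

# reconstruction for illustration only -- the real manifest is testdata/vcjob.yaml
kubectl --context addons-478069 create -f - <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: my-volcano
---
apiVersion: scheduling.volcano.sh/v1beta1
kind: Queue
metadata:
  name: test
spec:
  weight: 1
---
apiVersion: batch.volcano.sh/v1alpha1
kind: Job
metadata:
  name: test-job
  namespace: my-volcano
spec:
  schedulerName: volcano
  queue: test
  tasks:
  - name: nginx
    replicas: 1
    template:
      spec:
        restartPolicy: Never
        containers:
        - name: nginx
          image: nginx:latest
          command: ["sleep", "10m"]
          resources:
            requests:
              cpu: "1"
            limits:
              cpu: "1"
EOF

Since the node is the single minikube container created with --cpus=2 (NanoCpus 2000000000 in the docker inspect below) and the full addon set from the start command is already running on it, a hard 1-CPU request can plausibly exceed the node's remaining allocatable CPU, which matches the FailedScheduling "Insufficient cpu" event.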
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-478069
helpers_test.go:235: (dbg) docker inspect addons-478069:

-- stdout --
	[
	    {
	        "Id": "ed8e76cc3c7d183d06fecead10ac888b4c1e26c69cf90eb908c0ffce13ce0dd9",
	        "Created": "2024-09-14T17:37:06.163556862Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 299493,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-14T17:37:06.298185215Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:86ef0f8f97fae81f88ea7ff0848cf3d848f7964ac99ca9c948802eb432bfd351",
	        "ResolvConfPath": "/var/lib/docker/containers/ed8e76cc3c7d183d06fecead10ac888b4c1e26c69cf90eb908c0ffce13ce0dd9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ed8e76cc3c7d183d06fecead10ac888b4c1e26c69cf90eb908c0ffce13ce0dd9/hostname",
	        "HostsPath": "/var/lib/docker/containers/ed8e76cc3c7d183d06fecead10ac888b4c1e26c69cf90eb908c0ffce13ce0dd9/hosts",
	        "LogPath": "/var/lib/docker/containers/ed8e76cc3c7d183d06fecead10ac888b4c1e26c69cf90eb908c0ffce13ce0dd9/ed8e76cc3c7d183d06fecead10ac888b4c1e26c69cf90eb908c0ffce13ce0dd9-json.log",
	        "Name": "/addons-478069",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-478069:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-478069",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0570847368b3b5ad9aab6a93e9249d44893ae8210c87e6e1fc7f9521f677caba-init/diff:/var/lib/docker/overlay2/bf50794440da861115e50c5b2a7303272c8b338b643d76ff54196910083f51c0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0570847368b3b5ad9aab6a93e9249d44893ae8210c87e6e1fc7f9521f677caba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0570847368b3b5ad9aab6a93e9249d44893ae8210c87e6e1fc7f9521f677caba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0570847368b3b5ad9aab6a93e9249d44893ae8210c87e6e1fc7f9521f677caba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-478069",
	                "Source": "/var/lib/docker/volumes/addons-478069/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-478069",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-478069",
	                "name.minikube.sigs.k8s.io": "addons-478069",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2b696cd2f5489cf35380a05f55026909e5b85f82a135b1e797ae1a66abcf0e5f",
	            "SandboxKey": "/var/run/docker/netns/2b696cd2f548",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-478069": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "9b7424a4f5f768493aa32a50c577ceb2146b191f02559d532a39b6f35b593416",
	                    "EndpointID": "b2716b5ee22b90a06cc8e5628f8200d8e225c0839af8a839360fcdcc6fe32dfe",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-478069",
	                        "ed8e76cc3c7d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
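When only a single value from this dump is needed, the Go-template form that these logs themselves use later (for 22/tcp) is more convenient. For example, the host port mapped to the API server port 8443/tcp, 33141 according to the Ports section above, can be read with:

# print just the host port bound to the container's 8443/tcp
docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' addons-478069
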
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-478069 -n addons-478069
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-478069 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-478069 logs -n 25: (1.638100493s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-078725   | jenkins | v1.34.0 | 14 Sep 24 17:36 UTC |                     |
	|         | -p download-only-078725              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 14 Sep 24 17:36 UTC | 14 Sep 24 17:36 UTC |
	| delete  | -p download-only-078725              | download-only-078725   | jenkins | v1.34.0 | 14 Sep 24 17:36 UTC | 14 Sep 24 17:36 UTC |
	| start   | -o=json --download-only              | download-only-574797   | jenkins | v1.34.0 | 14 Sep 24 17:36 UTC |                     |
	|         | -p download-only-574797              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 14 Sep 24 17:36 UTC | 14 Sep 24 17:36 UTC |
	| delete  | -p download-only-574797              | download-only-574797   | jenkins | v1.34.0 | 14 Sep 24 17:36 UTC | 14 Sep 24 17:36 UTC |
	| delete  | -p download-only-078725              | download-only-078725   | jenkins | v1.34.0 | 14 Sep 24 17:36 UTC | 14 Sep 24 17:36 UTC |
	| delete  | -p download-only-574797              | download-only-574797   | jenkins | v1.34.0 | 14 Sep 24 17:36 UTC | 14 Sep 24 17:36 UTC |
	| start   | --download-only -p                   | download-docker-877234 | jenkins | v1.34.0 | 14 Sep 24 17:36 UTC |                     |
	|         | download-docker-877234               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-877234            | download-docker-877234 | jenkins | v1.34.0 | 14 Sep 24 17:36 UTC | 14 Sep 24 17:36 UTC |
	| start   | --download-only -p                   | binary-mirror-941961   | jenkins | v1.34.0 | 14 Sep 24 17:36 UTC |                     |
	|         | binary-mirror-941961                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:39039               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-941961              | binary-mirror-941961   | jenkins | v1.34.0 | 14 Sep 24 17:36 UTC | 14 Sep 24 17:36 UTC |
	| addons  | disable dashboard -p                 | addons-478069          | jenkins | v1.34.0 | 14 Sep 24 17:36 UTC |                     |
	|         | addons-478069                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-478069          | jenkins | v1.34.0 | 14 Sep 24 17:36 UTC |                     |
	|         | addons-478069                        |                        |         |         |                     |                     |
	| start   | -p addons-478069 --wait=true         | addons-478069          | jenkins | v1.34.0 | 14 Sep 24 17:36 UTC | 14 Sep 24 17:40 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 17:36:40
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 17:36:40.668365  299001 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:36:40.668522  299001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:36:40.668553  299001 out.go:358] Setting ErrFile to fd 2...
	I0914 17:36:40.668559  299001 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:36:40.668842  299001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-292860/.minikube/bin
	I0914 17:36:40.669339  299001 out.go:352] Setting JSON to false
	I0914 17:36:40.670252  299001 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4753,"bootTime":1726330648,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0914 17:36:40.670329  299001 start.go:139] virtualization:  
	I0914 17:36:40.672731  299001 out.go:177] * [addons-478069] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0914 17:36:40.674767  299001 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 17:36:40.674907  299001 notify.go:220] Checking for updates...
	I0914 17:36:40.678601  299001 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 17:36:40.680394  299001 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-292860/kubeconfig
	I0914 17:36:40.682225  299001 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-292860/.minikube
	I0914 17:36:40.683733  299001 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 17:36:40.685380  299001 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 17:36:40.687407  299001 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 17:36:40.717264  299001 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 17:36:40.717417  299001 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 17:36:40.770279  299001 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-14 17:36:40.760663548 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 17:36:40.770406  299001 docker.go:318] overlay module found
	I0914 17:36:40.772812  299001 out.go:177] * Using the docker driver based on user configuration
	I0914 17:36:40.774769  299001 start.go:297] selected driver: docker
	I0914 17:36:40.774795  299001 start.go:901] validating driver "docker" against <nil>
	I0914 17:36:40.774809  299001 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 17:36:40.775472  299001 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 17:36:40.825216  299001 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-14 17:36:40.815930277 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 17:36:40.825438  299001 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 17:36:40.825676  299001 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 17:36:40.827537  299001 out.go:177] * Using Docker driver with root privileges
	I0914 17:36:40.829014  299001 cni.go:84] Creating CNI manager for ""
	I0914 17:36:40.829086  299001 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 17:36:40.829102  299001 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 17:36:40.829187  299001 start.go:340] cluster config:
	{Name:addons-478069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-478069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHA
uthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:36:40.832108  299001 out.go:177] * Starting "addons-478069" primary control-plane node in "addons-478069" cluster
	I0914 17:36:40.834099  299001 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0914 17:36:40.835682  299001 out.go:177] * Pulling base image v0.0.45-1726281268-19643 ...
	I0914 17:36:40.837247  299001 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0914 17:36:40.837307  299001 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19643-292860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0914 17:36:40.837319  299001 cache.go:56] Caching tarball of preloaded images
	I0914 17:36:40.837344  299001 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e in local docker daemon
	I0914 17:36:40.837412  299001 preload.go:172] Found /home/jenkins/minikube-integration/19643-292860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 17:36:40.837423  299001 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0914 17:36:40.837783  299001 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/config.json ...
	I0914 17:36:40.837850  299001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/config.json: {Name:mk840857e552a068b9fd1b76590de4bec2e8dd2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:36:40.852405  299001 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e to local cache
	I0914 17:36:40.852534  299001 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e in local cache directory
	I0914 17:36:40.852560  299001 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e in local cache directory, skipping pull
	I0914 17:36:40.852566  299001 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e exists in cache, skipping pull
	I0914 17:36:40.852576  299001 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e as a tarball
	I0914 17:36:40.852586  299001 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e from local cache
	I0914 17:36:58.381571  299001 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e from cached tarball
	I0914 17:36:58.381611  299001 cache.go:194] Successfully downloaded all kic artifacts
	I0914 17:36:58.381641  299001 start.go:360] acquireMachinesLock for addons-478069: {Name:mk8e0b96f2c9b1092aeb5498cfa5b4404de7ccf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 17:36:58.381762  299001 start.go:364] duration metric: took 98.396µs to acquireMachinesLock for "addons-478069"
	I0914 17:36:58.381794  299001 start.go:93] Provisioning new machine with config: &{Name:addons-478069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-478069 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fa
lse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0914 17:36:58.381889  299001 start.go:125] createHost starting for "" (driver="docker")
	I0914 17:36:58.384115  299001 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0914 17:36:58.384392  299001 start.go:159] libmachine.API.Create for "addons-478069" (driver="docker")
	I0914 17:36:58.384429  299001 client.go:168] LocalClient.Create starting
	I0914 17:36:58.384548  299001 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19643-292860/.minikube/certs/ca.pem
	I0914 17:36:58.743038  299001 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19643-292860/.minikube/certs/cert.pem
	I0914 17:36:59.800047  299001 cli_runner.go:164] Run: docker network inspect addons-478069 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0914 17:36:59.818164  299001 cli_runner.go:211] docker network inspect addons-478069 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0914 17:36:59.818258  299001 network_create.go:284] running [docker network inspect addons-478069] to gather additional debugging logs...
	I0914 17:36:59.818290  299001 cli_runner.go:164] Run: docker network inspect addons-478069
	W0914 17:36:59.834907  299001 cli_runner.go:211] docker network inspect addons-478069 returned with exit code 1
	I0914 17:36:59.834940  299001 network_create.go:287] error running [docker network inspect addons-478069]: docker network inspect addons-478069: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-478069 not found
	I0914 17:36:59.834954  299001 network_create.go:289] output of [docker network inspect addons-478069]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-478069 not found
	
	** /stderr **
	I0914 17:36:59.835056  299001 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 17:36:59.852352  299001 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017fff80}
	I0914 17:36:59.852418  299001 network_create.go:124] attempt to create docker network addons-478069 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0914 17:36:59.852491  299001 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-478069 addons-478069
	I0914 17:36:59.921116  299001 network_create.go:108] docker network addons-478069 192.168.49.0/24 created
	I0914 17:36:59.921153  299001 kic.go:121] calculated static IP "192.168.49.2" for the "addons-478069" container
	I0914 17:36:59.921235  299001 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0914 17:36:59.935782  299001 cli_runner.go:164] Run: docker volume create addons-478069 --label name.minikube.sigs.k8s.io=addons-478069 --label created_by.minikube.sigs.k8s.io=true
	I0914 17:36:59.951995  299001 oci.go:103] Successfully created a docker volume addons-478069
	I0914 17:36:59.952095  299001 cli_runner.go:164] Run: docker run --rm --name addons-478069-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-478069 --entrypoint /usr/bin/test -v addons-478069:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e -d /var/lib
	I0914 17:37:02.034791  299001 cli_runner.go:217] Completed: docker run --rm --name addons-478069-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-478069 --entrypoint /usr/bin/test -v addons-478069:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e -d /var/lib: (2.082650467s)
	I0914 17:37:02.034832  299001 oci.go:107] Successfully prepared a docker volume addons-478069
	I0914 17:37:02.034853  299001 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0914 17:37:02.034873  299001 kic.go:194] Starting extracting preloaded images to volume ...
	I0914 17:37:02.034949  299001 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19643-292860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-478069:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e -I lz4 -xf /preloaded.tar -C /extractDir
	I0914 17:37:06.097704  299001 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19643-292860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-478069:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e -I lz4 -xf /preloaded.tar -C /extractDir: (4.062714924s)
	I0914 17:37:06.097737  299001 kic.go:203] duration metric: took 4.06285949s to extract preloaded images to volume ...
	W0914 17:37:06.097886  299001 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0914 17:37:06.098009  299001 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0914 17:37:06.149113  299001 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-478069 --name addons-478069 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-478069 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-478069 --network addons-478069 --ip 192.168.49.2 --volume addons-478069:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e
	I0914 17:37:06.467268  299001 cli_runner.go:164] Run: docker container inspect addons-478069 --format={{.State.Running}}
	I0914 17:37:06.486718  299001 cli_runner.go:164] Run: docker container inspect addons-478069 --format={{.State.Status}}
	I0914 17:37:06.507069  299001 cli_runner.go:164] Run: docker exec addons-478069 stat /var/lib/dpkg/alternatives/iptables
	I0914 17:37:06.576507  299001 oci.go:144] the created container "addons-478069" has a running status.
	I0914 17:37:06.576537  299001 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19643-292860/.minikube/machines/addons-478069/id_rsa...
	I0914 17:37:07.202437  299001 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19643-292860/.minikube/machines/addons-478069/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0914 17:37:07.233368  299001 cli_runner.go:164] Run: docker container inspect addons-478069 --format={{.State.Status}}
	I0914 17:37:07.264564  299001 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0914 17:37:07.264584  299001 kic_runner.go:114] Args: [docker exec --privileged addons-478069 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0914 17:37:07.333026  299001 cli_runner.go:164] Run: docker container inspect addons-478069 --format={{.State.Status}}
	I0914 17:37:07.359007  299001 machine.go:93] provisionDockerMachine start ...
	I0914 17:37:07.359115  299001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-478069
	I0914 17:37:07.382909  299001 main.go:141] libmachine: Using SSH client type: native
	I0914 17:37:07.383220  299001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0914 17:37:07.383238  299001 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 17:37:07.539863  299001 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-478069
	
	I0914 17:37:07.539886  299001 ubuntu.go:169] provisioning hostname "addons-478069"
	I0914 17:37:07.539953  299001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-478069
	I0914 17:37:07.562172  299001 main.go:141] libmachine: Using SSH client type: native
	I0914 17:37:07.562411  299001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0914 17:37:07.562424  299001 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-478069 && echo "addons-478069" | sudo tee /etc/hostname
	I0914 17:37:07.719774  299001 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-478069
	
	I0914 17:37:07.719901  299001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-478069
	I0914 17:37:07.736314  299001 main.go:141] libmachine: Using SSH client type: native
	I0914 17:37:07.736572  299001 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0914 17:37:07.736595  299001 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-478069' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-478069/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-478069' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 17:37:07.875917  299001 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 17:37:07.875943  299001 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19643-292860/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-292860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-292860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-292860/.minikube}
	I0914 17:37:07.875962  299001 ubuntu.go:177] setting up certificates
	I0914 17:37:07.875972  299001 provision.go:84] configureAuth start
	I0914 17:37:07.876034  299001 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-478069
	I0914 17:37:07.895176  299001 provision.go:143] copyHostCerts
	I0914 17:37:07.895319  299001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-292860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-292860/.minikube/ca.pem (1082 bytes)
	I0914 17:37:07.895441  299001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-292860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-292860/.minikube/cert.pem (1123 bytes)
	I0914 17:37:07.895498  299001 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-292860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-292860/.minikube/key.pem (1675 bytes)
	I0914 17:37:07.895544  299001 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-292860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-292860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-292860/.minikube/certs/ca-key.pem org=jenkins.addons-478069 san=[127.0.0.1 192.168.49.2 addons-478069 localhost minikube]
	I0914 17:37:08.475473  299001 provision.go:177] copyRemoteCerts
	I0914 17:37:08.475553  299001 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 17:37:08.475611  299001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-478069
	I0914 17:37:08.493517  299001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/addons-478069/id_rsa Username:docker}
	I0914 17:37:08.592150  299001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-292860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 17:37:08.615534  299001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-292860/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0914 17:37:08.638865  299001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-292860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 17:37:08.662894  299001 provision.go:87] duration metric: took 786.898052ms to configureAuth
	I0914 17:37:08.662921  299001 ubuntu.go:193] setting minikube options for container-runtime
	I0914 17:37:08.663104  299001 config.go:182] Loaded profile config "addons-478069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0914 17:37:08.663118  299001 machine.go:96] duration metric: took 1.304090244s to provisionDockerMachine
	I0914 17:37:08.663126  299001 client.go:171] duration metric: took 10.278687934s to LocalClient.Create
	I0914 17:37:08.663150  299001 start.go:167] duration metric: took 10.278760435s to libmachine.API.Create "addons-478069"
	I0914 17:37:08.663166  299001 start.go:293] postStartSetup for "addons-478069" (driver="docker")
	I0914 17:37:08.663175  299001 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 17:37:08.663232  299001 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 17:37:08.663276  299001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-478069
	I0914 17:37:08.679673  299001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/addons-478069/id_rsa Username:docker}
	I0914 17:37:08.778003  299001 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 17:37:08.781496  299001 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 17:37:08.781532  299001 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 17:37:08.781544  299001 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 17:37:08.781551  299001 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0914 17:37:08.781562  299001 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-292860/.minikube/addons for local assets ...
	I0914 17:37:08.781630  299001 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-292860/.minikube/files for local assets ...
	I0914 17:37:08.781651  299001 start.go:296] duration metric: took 118.479262ms for postStartSetup
	I0914 17:37:08.781958  299001 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-478069
	I0914 17:37:08.802113  299001 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/config.json ...
	I0914 17:37:08.802412  299001 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:37:08.802460  299001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-478069
	I0914 17:37:08.818995  299001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/addons-478069/id_rsa Username:docker}
	I0914 17:37:08.913196  299001 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0914 17:37:08.918175  299001 start.go:128] duration metric: took 10.536268922s to createHost
	I0914 17:37:08.918205  299001 start.go:83] releasing machines lock for "addons-478069", held for 10.536427756s
	I0914 17:37:08.918283  299001 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-478069
	I0914 17:37:08.934286  299001 ssh_runner.go:195] Run: cat /version.json
	I0914 17:37:08.934351  299001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-478069
	I0914 17:37:08.934629  299001 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 17:37:08.934685  299001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-478069
	I0914 17:37:08.954870  299001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/addons-478069/id_rsa Username:docker}
	I0914 17:37:08.963747  299001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/addons-478069/id_rsa Username:docker}
	I0914 17:37:09.047482  299001 ssh_runner.go:195] Run: systemctl --version
	I0914 17:37:09.174704  299001 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 17:37:09.178976  299001 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0914 17:37:09.204839  299001 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0914 17:37:09.204956  299001 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 17:37:09.232928  299001 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0914 17:37:09.232951  299001 start.go:495] detecting cgroup driver to use...
	I0914 17:37:09.232984  299001 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0914 17:37:09.233033  299001 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0914 17:37:09.245632  299001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 17:37:09.256700  299001 docker.go:217] disabling cri-docker service (if available) ...
	I0914 17:37:09.256765  299001 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 17:37:09.270765  299001 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 17:37:09.285039  299001 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 17:37:09.379359  299001 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 17:37:09.470702  299001 docker.go:233] disabling docker service ...
	I0914 17:37:09.470784  299001 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 17:37:09.490344  299001 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 17:37:09.502408  299001 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 17:37:09.583393  299001 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 17:37:09.662405  299001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 17:37:09.674146  299001 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 17:37:09.691121  299001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0914 17:37:09.701395  299001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 17:37:09.711571  299001 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 17:37:09.711687  299001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 17:37:09.721722  299001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 17:37:09.731768  299001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 17:37:09.741923  299001 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 17:37:09.752143  299001 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 17:37:09.762000  299001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 17:37:09.772132  299001 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0914 17:37:09.782099  299001 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0914 17:37:09.792042  299001 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 17:37:09.800606  299001 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 17:37:09.808987  299001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:37:09.892177  299001 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0914 17:37:10.033375  299001 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0914 17:37:10.033497  299001 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0914 17:37:10.038019  299001 start.go:563] Will wait 60s for crictl version
	I0914 17:37:10.038143  299001 ssh_runner.go:195] Run: which crictl
	I0914 17:37:10.042355  299001 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 17:37:10.083227  299001 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0914 17:37:10.083396  299001 ssh_runner.go:195] Run: containerd --version
	I0914 17:37:10.107265  299001 ssh_runner.go:195] Run: containerd --version
	I0914 17:37:10.133977  299001 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0914 17:37:10.135744  299001 cli_runner.go:164] Run: docker network inspect addons-478069 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 17:37:10.152186  299001 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0914 17:37:10.156245  299001 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 17:37:10.168428  299001 kubeadm.go:883] updating cluster {Name:addons-478069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-478069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 17:37:10.168562  299001 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0914 17:37:10.168630  299001 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 17:37:10.206391  299001 containerd.go:627] all images are preloaded for containerd runtime.
	I0914 17:37:10.206418  299001 containerd.go:534] Images already preloaded, skipping extraction
	I0914 17:37:10.206480  299001 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 17:37:10.243210  299001 containerd.go:627] all images are preloaded for containerd runtime.
	I0914 17:37:10.243234  299001 cache_images.go:84] Images are preloaded, skipping loading
	I0914 17:37:10.243242  299001 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0914 17:37:10.243335  299001 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-478069 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-478069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 17:37:10.243422  299001 ssh_runner.go:195] Run: sudo crictl info
	I0914 17:37:10.282583  299001 cni.go:84] Creating CNI manager for ""
	I0914 17:37:10.282608  299001 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 17:37:10.282619  299001 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 17:37:10.282642  299001 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-478069 NodeName:addons-478069 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0914 17:37:10.282781  299001 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-478069"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 17:37:10.282851  299001 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0914 17:37:10.291691  299001 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 17:37:10.291760  299001 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 17:37:10.300291  299001 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0914 17:37:10.317384  299001 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 17:37:10.335889  299001 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0914 17:37:10.354455  299001 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0914 17:37:10.357866  299001 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 17:37:10.368556  299001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:37:10.454002  299001 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 17:37:10.470789  299001 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069 for IP: 192.168.49.2
	I0914 17:37:10.470812  299001 certs.go:194] generating shared ca certs ...
	I0914 17:37:10.470828  299001 certs.go:226] acquiring lock for ca certs: {Name:mkf21090b38f44552475e7c85ae32e95553c36bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:37:10.470963  299001 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-292860/.minikube/ca.key
	I0914 17:37:10.973777  299001 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-292860/.minikube/ca.crt ...
	I0914 17:37:10.973815  299001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-292860/.minikube/ca.crt: {Name:mk1449f51b33879608b070e037884ff5e0ebe11e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:37:10.974865  299001 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-292860/.minikube/ca.key ...
	I0914 17:37:10.974883  299001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-292860/.minikube/ca.key: {Name:mk78bc7c6d76aef14497026104e20eea502af5bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:37:10.974979  299001 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-292860/.minikube/proxy-client-ca.key
	I0914 17:37:11.423218  299001 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-292860/.minikube/proxy-client-ca.crt ...
	I0914 17:37:11.423251  299001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-292860/.minikube/proxy-client-ca.crt: {Name:mk2b6fb15d6d6c4b404731320caaa17027c6c7d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:37:11.423437  299001 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-292860/.minikube/proxy-client-ca.key ...
	I0914 17:37:11.423450  299001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-292860/.minikube/proxy-client-ca.key: {Name:mk8d81be1193a345a85b6f5690c21a2feb278202 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:37:11.423971  299001 certs.go:256] generating profile certs ...
	I0914 17:37:11.424045  299001 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.key
	I0914 17:37:11.424062  299001 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt with IP's: []
	I0914 17:37:11.679603  299001 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt ...
	I0914 17:37:11.679637  299001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: {Name:mke4aee28c1cf3ba20c56e19917261d4877f19e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:37:11.679831  299001 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.key ...
	I0914 17:37:11.679846  299001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.key: {Name:mkb3d17fc6ce64b6376c6e1daba7e7e1f6145d02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:37:11.680299  299001 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/apiserver.key.e390d6f9
	I0914 17:37:11.680324  299001 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/apiserver.crt.e390d6f9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0914 17:37:12.458085  299001 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/apiserver.crt.e390d6f9 ...
	I0914 17:37:12.458118  299001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/apiserver.crt.e390d6f9: {Name:mkce5d8baf225c5a4224c4644809d5999364983a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:37:12.458313  299001 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/apiserver.key.e390d6f9 ...
	I0914 17:37:12.458328  299001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/apiserver.key.e390d6f9: {Name:mkf4826b29c9624d60668601d4715a7a19a480c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:37:12.458418  299001 certs.go:381] copying /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/apiserver.crt.e390d6f9 -> /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/apiserver.crt
	I0914 17:37:12.458508  299001 certs.go:385] copying /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/apiserver.key.e390d6f9 -> /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/apiserver.key
	I0914 17:37:12.458566  299001 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/proxy-client.key
	I0914 17:37:12.458588  299001 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/proxy-client.crt with IP's: []
	I0914 17:37:13.114473  299001 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/proxy-client.crt ...
	I0914 17:37:13.114504  299001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/proxy-client.crt: {Name:mkeba0a0a399c6fe54368f7079a0a321c896aac3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:37:13.114698  299001 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/proxy-client.key ...
	I0914 17:37:13.114713  299001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/proxy-client.key: {Name:mkcd421c4ffaf705a061af3fe01d942844368092 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:37:13.114922  299001 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-292860/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 17:37:13.114964  299001 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-292860/.minikube/certs/ca.pem (1082 bytes)
	I0914 17:37:13.114995  299001 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-292860/.minikube/certs/cert.pem (1123 bytes)
	I0914 17:37:13.115023  299001 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-292860/.minikube/certs/key.pem (1675 bytes)
	I0914 17:37:13.116035  299001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-292860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 17:37:13.147286  299001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-292860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 17:37:13.175182  299001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-292860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 17:37:13.199190  299001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-292860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 17:37:13.224271  299001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0914 17:37:13.248935  299001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0914 17:37:13.273483  299001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 17:37:13.298028  299001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 17:37:13.322732  299001 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-292860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 17:37:13.347811  299001 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 17:37:13.366554  299001 ssh_runner.go:195] Run: openssl version
	I0914 17:37:13.372240  299001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 17:37:13.383169  299001 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:37:13.387425  299001 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:37:13.387519  299001 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 17:37:13.394932  299001 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 17:37:13.405663  299001 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 17:37:13.410377  299001 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0914 17:37:13.410458  299001 kubeadm.go:392] StartCluster: {Name:addons-478069 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-478069 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:37:13.410563  299001 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0914 17:37:13.410656  299001 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 17:37:13.456641  299001 cri.go:89] found id: ""
	I0914 17:37:13.456740  299001 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 17:37:13.465944  299001 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0914 17:37:13.475193  299001 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0914 17:37:13.475284  299001 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0914 17:37:13.484578  299001 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0914 17:37:13.484619  299001 kubeadm.go:157] found existing configuration files:
	
	I0914 17:37:13.484678  299001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0914 17:37:13.493814  299001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0914 17:37:13.493878  299001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0914 17:37:13.502755  299001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0914 17:37:13.511942  299001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0914 17:37:13.512086  299001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0914 17:37:13.520990  299001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0914 17:37:13.529979  299001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0914 17:37:13.530048  299001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0914 17:37:13.538802  299001 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0914 17:37:13.547493  299001 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0914 17:37:13.547581  299001 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0914 17:37:13.556229  299001 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0914 17:37:13.596908  299001 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0914 17:37:13.597007  299001 kubeadm.go:310] [preflight] Running pre-flight checks
	I0914 17:37:13.618059  299001 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0914 17:37:13.618138  299001 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-aws
	I0914 17:37:13.618180  299001 kubeadm.go:310] OS: Linux
	I0914 17:37:13.618230  299001 kubeadm.go:310] CGROUPS_CPU: enabled
	I0914 17:37:13.618283  299001 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0914 17:37:13.618334  299001 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0914 17:37:13.618385  299001 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0914 17:37:13.618436  299001 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0914 17:37:13.618494  299001 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0914 17:37:13.618544  299001 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0914 17:37:13.618598  299001 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0914 17:37:13.618648  299001 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0914 17:37:13.676646  299001 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0914 17:37:13.676759  299001 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0914 17:37:13.676853  299001 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0914 17:37:13.683994  299001 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0914 17:37:13.687170  299001 out.go:235]   - Generating certificates and keys ...
	I0914 17:37:13.687373  299001 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0914 17:37:13.687474  299001 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0914 17:37:13.950324  299001 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0914 17:37:14.342991  299001 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0914 17:37:14.648711  299001 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0914 17:37:15.611571  299001 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0914 17:37:16.192491  299001 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0914 17:37:16.192665  299001 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-478069 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0914 17:37:16.645774  299001 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0914 17:37:16.646062  299001 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-478069 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0914 17:37:16.826844  299001 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0914 17:37:17.160493  299001 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0914 17:37:17.383392  299001 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0914 17:37:17.383755  299001 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0914 17:37:17.651143  299001 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0914 17:37:18.533508  299001 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0914 17:37:18.719685  299001 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0914 17:37:19.358881  299001 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0914 17:37:20.049545  299001 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0914 17:37:20.050432  299001 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0914 17:37:20.055642  299001 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0914 17:37:20.058136  299001 out.go:235]   - Booting up control plane ...
	I0914 17:37:20.058246  299001 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0914 17:37:20.058322  299001 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0914 17:37:20.059049  299001 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0914 17:37:20.088660  299001 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0914 17:37:20.095742  299001 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0914 17:37:20.095801  299001 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0914 17:37:20.200154  299001 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0914 17:37:20.200284  299001 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0914 17:37:21.200461  299001 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001955534s
	I0914 17:37:21.200558  299001 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0914 17:37:27.202249  299001 kubeadm.go:310] [api-check] The API server is healthy after 6.002063619s
	I0914 17:37:27.222002  299001 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0914 17:37:27.241880  299001 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0914 17:37:27.266258  299001 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0914 17:37:27.266454  299001 kubeadm.go:310] [mark-control-plane] Marking the node addons-478069 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0914 17:37:27.277130  299001 kubeadm.go:310] [bootstrap-token] Using token: f1w2qq.0tev7hhiy9xwpuyx
	I0914 17:37:27.278977  299001 out.go:235]   - Configuring RBAC rules ...
	I0914 17:37:27.279095  299001 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0914 17:37:27.284080  299001 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0914 17:37:27.291508  299001 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0914 17:37:27.295300  299001 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0914 17:37:27.300477  299001 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0914 17:37:27.304932  299001 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0914 17:37:27.608930  299001 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0914 17:37:28.043907  299001 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0914 17:37:28.611125  299001 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0914 17:37:28.612293  299001 kubeadm.go:310] 
	I0914 17:37:28.612373  299001 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0914 17:37:28.612386  299001 kubeadm.go:310] 
	I0914 17:37:28.612511  299001 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0914 17:37:28.612520  299001 kubeadm.go:310] 
	I0914 17:37:28.612563  299001 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0914 17:37:28.612635  299001 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0914 17:37:28.612704  299001 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0914 17:37:28.612712  299001 kubeadm.go:310] 
	I0914 17:37:28.612775  299001 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0914 17:37:28.612781  299001 kubeadm.go:310] 
	I0914 17:37:28.612829  299001 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0914 17:37:28.612833  299001 kubeadm.go:310] 
	I0914 17:37:28.612894  299001 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0914 17:37:28.612977  299001 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0914 17:37:28.613050  299001 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0914 17:37:28.613058  299001 kubeadm.go:310] 
	I0914 17:37:28.613141  299001 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0914 17:37:28.613222  299001 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0914 17:37:28.613231  299001 kubeadm.go:310] 
	I0914 17:37:28.613315  299001 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token f1w2qq.0tev7hhiy9xwpuyx \
	I0914 17:37:28.613421  299001 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:15d64a7e7cc5fc4dd8f4ac847fe2a40979377325c46fe801f6e1942c93973f3f \
	I0914 17:37:28.613446  299001 kubeadm.go:310] 	--control-plane 
	I0914 17:37:28.613456  299001 kubeadm.go:310] 
	I0914 17:37:28.613540  299001 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0914 17:37:28.613547  299001 kubeadm.go:310] 
	I0914 17:37:28.613628  299001 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token f1w2qq.0tev7hhiy9xwpuyx \
	I0914 17:37:28.613734  299001 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:15d64a7e7cc5fc4dd8f4ac847fe2a40979377325c46fe801f6e1942c93973f3f 
	I0914 17:37:28.617212  299001 kubeadm.go:310] W0914 17:37:13.593101    1034 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 17:37:28.617559  299001 kubeadm.go:310] W0914 17:37:13.594168    1034 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0914 17:37:28.617781  299001 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-aws\n", err: exit status 1
	I0914 17:37:28.617891  299001 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0914 17:37:28.617918  299001 cni.go:84] Creating CNI manager for ""
	I0914 17:37:28.617929  299001 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 17:37:28.620847  299001 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0914 17:37:28.622739  299001 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0914 17:37:28.626603  299001 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0914 17:37:28.626625  299001 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0914 17:37:28.651017  299001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0914 17:37:28.972990  299001 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0914 17:37:28.973123  299001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 17:37:28.973204  299001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-478069 minikube.k8s.io/updated_at=2024_09_14T17_37_28_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a minikube.k8s.io/name=addons-478069 minikube.k8s.io/primary=true
	I0914 17:37:29.185957  299001 ops.go:34] apiserver oom_adj: -16
	I0914 17:37:29.186119  299001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 17:37:29.686983  299001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 17:37:30.186156  299001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 17:37:30.686687  299001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 17:37:31.186286  299001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 17:37:31.686193  299001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 17:37:32.187110  299001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 17:37:32.686937  299001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 17:37:33.186324  299001 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0914 17:37:33.334727  299001 kubeadm.go:1113] duration metric: took 4.361649888s to wait for elevateKubeSystemPrivileges
	I0914 17:37:33.334759  299001 kubeadm.go:394] duration metric: took 19.924306238s to StartCluster
	I0914 17:37:33.334775  299001 settings.go:142] acquiring lock: {Name:mk211baf85a5d12c53e1bc3687f6aa07604e6004 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:37:33.335474  299001 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-292860/kubeconfig
	I0914 17:37:33.335943  299001 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-292860/kubeconfig: {Name:mke326c789f0dca4467afe86488dc47fc7003eaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:37:33.336162  299001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0914 17:37:33.336181  299001 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0914 17:37:33.336448  299001 config.go:182] Loaded profile config "addons-478069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0914 17:37:33.336498  299001 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0914 17:37:33.336576  299001 addons.go:69] Setting yakd=true in profile "addons-478069"
	I0914 17:37:33.336590  299001 addons.go:234] Setting addon yakd=true in "addons-478069"
	I0914 17:37:33.336612  299001 host.go:66] Checking if "addons-478069" exists ...
	I0914 17:37:33.337078  299001 cli_runner.go:164] Run: docker container inspect addons-478069 --format={{.State.Status}}
	I0914 17:37:33.337564  299001 addons.go:69] Setting cloud-spanner=true in profile "addons-478069"
	I0914 17:37:33.337570  299001 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-478069"
	I0914 17:37:33.337583  299001 addons.go:234] Setting addon cloud-spanner=true in "addons-478069"
	I0914 17:37:33.337588  299001 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-478069"
	I0914 17:37:33.337607  299001 host.go:66] Checking if "addons-478069" exists ...
	I0914 17:37:33.337611  299001 host.go:66] Checking if "addons-478069" exists ...
	I0914 17:37:33.338022  299001 cli_runner.go:164] Run: docker container inspect addons-478069 --format={{.State.Status}}
	I0914 17:37:33.338079  299001 cli_runner.go:164] Run: docker container inspect addons-478069 --format={{.State.Status}}
	I0914 17:37:33.341681  299001 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-478069"
	I0914 17:37:33.341747  299001 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-478069"
	I0914 17:37:33.341785  299001 host.go:66] Checking if "addons-478069" exists ...
	I0914 17:37:33.342258  299001 cli_runner.go:164] Run: docker container inspect addons-478069 --format={{.State.Status}}
	I0914 17:37:33.344644  299001 out.go:177] * Verifying Kubernetes components...
	I0914 17:37:33.344813  299001 addons.go:69] Setting registry=true in profile "addons-478069"
	I0914 17:37:33.350649  299001 addons.go:234] Setting addon registry=true in "addons-478069"
	I0914 17:37:33.350694  299001 host.go:66] Checking if "addons-478069" exists ...
	I0914 17:37:33.351176  299001 cli_runner.go:164] Run: docker container inspect addons-478069 --format={{.State.Status}}
	I0914 17:37:33.352212  299001 addons.go:69] Setting default-storageclass=true in profile "addons-478069"
	I0914 17:37:33.352288  299001 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-478069"
	I0914 17:37:33.352681  299001 cli_runner.go:164] Run: docker container inspect addons-478069 --format={{.State.Status}}
	I0914 17:37:33.344829  299001 addons.go:69] Setting storage-provisioner=true in profile "addons-478069"
	I0914 17:37:33.364856  299001 addons.go:234] Setting addon storage-provisioner=true in "addons-478069"
	I0914 17:37:33.364898  299001 host.go:66] Checking if "addons-478069" exists ...
	I0914 17:37:33.365501  299001 cli_runner.go:164] Run: docker container inspect addons-478069 --format={{.State.Status}}
	I0914 17:37:33.376365  299001 addons.go:69] Setting gcp-auth=true in profile "addons-478069"
	I0914 17:37:33.376461  299001 mustload.go:65] Loading cluster: addons-478069
	I0914 17:37:33.376699  299001 config.go:182] Loaded profile config "addons-478069": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0914 17:37:33.377010  299001 cli_runner.go:164] Run: docker container inspect addons-478069 --format={{.State.Status}}
	I0914 17:37:33.344836  299001 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-478069"
	I0914 17:37:33.377462  299001 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-478069"
	I0914 17:37:33.377756  299001 cli_runner.go:164] Run: docker container inspect addons-478069 --format={{.State.Status}}
	I0914 17:37:33.344843  299001 addons.go:69] Setting volcano=true in profile "addons-478069"
	I0914 17:37:33.391058  299001 addons.go:234] Setting addon volcano=true in "addons-478069"
	I0914 17:37:33.391106  299001 host.go:66] Checking if "addons-478069" exists ...
	I0914 17:37:33.344848  299001 addons.go:69] Setting volumesnapshots=true in profile "addons-478069"
	I0914 17:37:33.391854  299001 addons.go:234] Setting addon volumesnapshots=true in "addons-478069"
	I0914 17:37:33.391889  299001 host.go:66] Checking if "addons-478069" exists ...
	I0914 17:37:33.392304  299001 cli_runner.go:164] Run: docker container inspect addons-478069 --format={{.State.Status}}
	I0914 17:37:33.392881  299001 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 17:37:33.416380  299001 addons.go:69] Setting ingress=true in profile "addons-478069"
	I0914 17:37:33.416468  299001 addons.go:234] Setting addon ingress=true in "addons-478069"
	I0914 17:37:33.416569  299001 host.go:66] Checking if "addons-478069" exists ...
	I0914 17:37:33.417188  299001 cli_runner.go:164] Run: docker container inspect addons-478069 --format={{.State.Status}}
	I0914 17:37:33.391831  299001 cli_runner.go:164] Run: docker container inspect addons-478069 --format={{.State.Status}}
	I0914 17:37:33.435854  299001 addons.go:69] Setting ingress-dns=true in profile "addons-478069"
	I0914 17:37:33.435914  299001 addons.go:234] Setting addon ingress-dns=true in "addons-478069"
	I0914 17:37:33.435989  299001 host.go:66] Checking if "addons-478069" exists ...
	I0914 17:37:33.436584  299001 cli_runner.go:164] Run: docker container inspect addons-478069 --format={{.State.Status}}
	I0914 17:37:33.467699  299001 addons.go:69] Setting inspektor-gadget=true in profile "addons-478069"
	I0914 17:37:33.467772  299001 addons.go:234] Setting addon inspektor-gadget=true in "addons-478069"
	I0914 17:37:33.467845  299001 host.go:66] Checking if "addons-478069" exists ...
	I0914 17:37:33.468373  299001 cli_runner.go:164] Run: docker container inspect addons-478069 --format={{.State.Status}}
	I0914 17:37:33.487986  299001 addons.go:69] Setting metrics-server=true in profile "addons-478069"
	I0914 17:37:33.488025  299001 addons.go:234] Setting addon metrics-server=true in "addons-478069"
	I0914 17:37:33.488063  299001 host.go:66] Checking if "addons-478069" exists ...
	I0914 17:37:33.488570  299001 cli_runner.go:164] Run: docker container inspect addons-478069 --format={{.State.Status}}
	I0914 17:37:33.509673  299001 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0914 17:37:33.522162  299001 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0914 17:37:33.522233  299001 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0914 17:37:33.522335  299001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-478069
	I0914 17:37:33.536248  299001 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0914 17:37:33.537312  299001 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0914 17:37:33.554834  299001 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0914 17:37:33.560804  299001 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 17:37:33.577240  299001 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0914 17:37:33.577388  299001 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0914 17:37:33.551375  299001 addons.go:234] Setting addon default-storageclass=true in "addons-478069"
	I0914 17:37:33.580037  299001 host.go:66] Checking if "addons-478069" exists ...
	I0914 17:37:33.580528  299001 cli_runner.go:164] Run: docker container inspect addons-478069 --format={{.State.Status}}
	I0914 17:37:33.580791  299001 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 17:37:33.580803  299001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 17:37:33.580844  299001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-478069
	I0914 17:37:33.612048  299001 out.go:177]   - Using image docker.io/registry:2.8.3
	I0914 17:37:33.613936  299001 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0914 17:37:33.613960  299001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0914 17:37:33.614026  299001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-478069
	I0914 17:37:33.628207  299001 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-478069"
	I0914 17:37:33.628307  299001 host.go:66] Checking if "addons-478069" exists ...
	I0914 17:37:33.628828  299001 cli_runner.go:164] Run: docker container inspect addons-478069 --format={{.State.Status}}
	I0914 17:37:33.641287  299001 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0914 17:37:33.641307  299001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0914 17:37:33.641376  299001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-478069
	I0914 17:37:33.658022  299001 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0914 17:37:33.660391  299001 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0914 17:37:33.663684  299001 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0914 17:37:33.665461  299001 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0914 17:37:33.665573  299001 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0914 17:37:33.667123  299001 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0914 17:37:33.668481  299001 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0914 17:37:33.668541  299001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0914 17:37:33.669535  299001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-478069
	I0914 17:37:33.690246  299001 host.go:66] Checking if "addons-478069" exists ...
	I0914 17:37:33.669365  299001 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0914 17:37:33.692668  299001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0914 17:37:33.692738  299001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-478069
	I0914 17:37:33.695544  299001 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0914 17:37:33.699714  299001 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0914 17:37:33.699832  299001 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0914 17:37:33.699890  299001 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0914 17:37:33.705013  299001 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0914 17:37:33.705036  299001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0914 17:37:33.705099  299001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-478069
	I0914 17:37:33.705237  299001 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0914 17:37:33.705245  299001 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0914 17:37:33.705282  299001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-478069
	I0914 17:37:33.741674  299001 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0914 17:37:33.743472  299001 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0914 17:37:33.745944  299001 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0914 17:37:33.745971  299001 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0914 17:37:33.746039  299001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-478069
	I0914 17:37:33.768014  299001 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0914 17:37:33.768095  299001 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0914 17:37:33.768194  299001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-478069
	I0914 17:37:33.786033  299001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/addons-478069/id_rsa Username:docker}
	I0914 17:37:33.786256  299001 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0914 17:37:33.788288  299001 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 17:37:33.788410  299001 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0914 17:37:33.790664  299001 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 17:37:33.790778  299001 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 17:37:33.790803  299001 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 17:37:33.790894  299001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-478069
	I0914 17:37:33.794157  299001 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 17:37:33.794180  299001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0914 17:37:33.794242  299001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-478069
	I0914 17:37:33.836130  299001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/addons-478069/id_rsa Username:docker}
	I0914 17:37:33.838998  299001 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0914 17:37:33.841679  299001 out.go:177]   - Using image docker.io/busybox:stable
	I0914 17:37:33.844582  299001 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0914 17:37:33.844605  299001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0914 17:37:33.844667  299001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-478069
	I0914 17:37:33.851841  299001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/addons-478069/id_rsa Username:docker}
	I0914 17:37:33.889113  299001 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 17:37:33.889142  299001 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 17:37:33.889206  299001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-478069
	I0914 17:37:33.902267  299001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/addons-478069/id_rsa Username:docker}
	I0914 17:37:33.923705  299001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/addons-478069/id_rsa Username:docker}
	I0914 17:37:33.955266  299001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/addons-478069/id_rsa Username:docker}
	I0914 17:37:33.956448  299001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/addons-478069/id_rsa Username:docker}
	I0914 17:37:33.967443  299001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/addons-478069/id_rsa Username:docker}
	I0914 17:37:33.986344  299001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/addons-478069/id_rsa Username:docker}
	I0914 17:37:34.008403  299001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/addons-478069/id_rsa Username:docker}
	I0914 17:37:34.022491  299001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/addons-478069/id_rsa Username:docker}
	I0914 17:37:34.044348  299001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/addons-478069/id_rsa Username:docker}
	I0914 17:37:34.050083  299001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/addons-478069/id_rsa Username:docker}
	I0914 17:37:34.050520  299001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/addons-478069/id_rsa Username:docker}
	W0914 17:37:34.060843  299001 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0914 17:37:34.060875  299001 retry.go:31] will retry after 276.713473ms: ssh: handshake failed: EOF
	W0914 17:37:34.341497  299001 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0914 17:37:34.341539  299001 retry.go:31] will retry after 294.509115ms: ssh: handshake failed: EOF
	I0914 17:37:34.420072  299001 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0914 17:37:34.420152  299001 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0914 17:37:34.556715  299001 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 17:37:34.556781  299001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0914 17:37:34.579180  299001 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.242991151s)
	I0914 17:37:34.579402  299001 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0914 17:37:34.579509  299001 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.18659812s)
	I0914 17:37:34.579580  299001 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 17:37:34.670412  299001 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0914 17:37:34.670496  299001 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0914 17:37:34.671194  299001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0914 17:37:34.674694  299001 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0914 17:37:34.674749  299001 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0914 17:37:34.676758  299001 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0914 17:37:34.676812  299001 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0914 17:37:34.775936  299001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0914 17:37:34.779933  299001 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0914 17:37:34.780001  299001 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0914 17:37:34.780887  299001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 17:37:34.791937  299001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0914 17:37:34.796176  299001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0914 17:37:34.799753  299001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0914 17:37:34.832274  299001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 17:37:34.846564  299001 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 17:37:34.846634  299001 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 17:37:34.850887  299001 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0914 17:37:34.850958  299001 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0914 17:37:34.984691  299001 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0914 17:37:34.984712  299001 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0914 17:37:34.995465  299001 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0914 17:37:34.995488  299001 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0914 17:37:35.019931  299001 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0914 17:37:35.019958  299001 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0914 17:37:35.135756  299001 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0914 17:37:35.135823  299001 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0914 17:37:35.208444  299001 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 17:37:35.208511  299001 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 17:37:35.210694  299001 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0914 17:37:35.210755  299001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0914 17:37:35.218246  299001 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0914 17:37:35.218311  299001 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0914 17:37:35.223711  299001 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0914 17:37:35.223776  299001 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0914 17:37:35.299751  299001 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0914 17:37:35.299824  299001 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0914 17:37:35.320002  299001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0914 17:37:35.335790  299001 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0914 17:37:35.335856  299001 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0914 17:37:35.414693  299001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 17:37:35.428276  299001 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 17:37:35.428363  299001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0914 17:37:35.431870  299001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0914 17:37:35.438466  299001 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0914 17:37:35.438541  299001 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0914 17:37:35.452996  299001 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0914 17:37:35.453082  299001 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0914 17:37:35.530506  299001 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0914 17:37:35.530578  299001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0914 17:37:35.682274  299001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 17:37:35.719916  299001 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0914 17:37:35.719991  299001 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0914 17:37:35.727293  299001 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0914 17:37:35.727371  299001 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0914 17:37:35.882343  299001 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0914 17:37:35.882440  299001 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0914 17:37:35.909497  299001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0914 17:37:36.028090  299001 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0914 17:37:36.028160  299001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0914 17:37:36.111120  299001 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 17:37:36.111202  299001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0914 17:37:36.396211  299001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0914 17:37:36.413520  299001 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0914 17:37:36.413590  299001 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0914 17:37:36.650906  299001 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0914 17:37:36.651010  299001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0914 17:37:36.768485  299001 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.188843605s)
	I0914 17:37:36.768572  299001 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.189134018s)
	I0914 17:37:36.768715  299001 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0914 17:37:36.769545  299001 node_ready.go:35] waiting up to 6m0s for node "addons-478069" to be "Ready" ...
	I0914 17:37:36.773736  299001 node_ready.go:49] node "addons-478069" has status "Ready":"True"
	I0914 17:37:36.773803  299001 node_ready.go:38] duration metric: took 4.200314ms for node "addons-478069" to be "Ready" ...
	I0914 17:37:36.773829  299001 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 17:37:36.792875  299001 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-cd7q4" in "kube-system" namespace to be "Ready" ...
	I0914 17:37:37.020074  299001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.34881251s)
	I0914 17:37:37.146094  299001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.370073026s)
	I0914 17:37:37.308509  299001 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-478069" context rescaled to 1 replicas
	I0914 17:37:37.342614  299001 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-cd7q4" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-cd7q4" not found
	I0914 17:37:37.342690  299001 pod_ready.go:82] duration metric: took 549.77057ms for pod "coredns-7c65d6cfc9-cd7q4" in "kube-system" namespace to be "Ready" ...
	E0914 17:37:37.342717  299001 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-cd7q4" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-cd7q4" not found
	I0914 17:37:37.342758  299001 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-vcslc" in "kube-system" namespace to be "Ready" ...
	I0914 17:37:37.399865  299001 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0914 17:37:37.399929  299001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0914 17:37:37.755475  299001 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 17:37:37.755555  299001 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0914 17:37:38.139722  299001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0914 17:37:38.576191  299001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.795239975s)
	I0914 17:37:39.362599  299001 pod_ready.go:103] pod "coredns-7c65d6cfc9-vcslc" in "kube-system" namespace has status "Ready":"False"
	I0914 17:37:40.900628  299001 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0914 17:37:40.900778  299001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-478069
	I0914 17:37:40.931670  299001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/addons-478069/id_rsa Username:docker}
	I0914 17:37:41.206803  299001 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0914 17:37:41.341380  299001 addons.go:234] Setting addon gcp-auth=true in "addons-478069"
	I0914 17:37:41.341433  299001 host.go:66] Checking if "addons-478069" exists ...
	I0914 17:37:41.341980  299001 cli_runner.go:164] Run: docker container inspect addons-478069 --format={{.State.Status}}
	I0914 17:37:41.372392  299001 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0914 17:37:41.372452  299001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-478069
	I0914 17:37:41.399628  299001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/addons-478069/id_rsa Username:docker}
	I0914 17:37:41.849564  299001 pod_ready.go:103] pod "coredns-7c65d6cfc9-vcslc" in "kube-system" namespace has status "Ready":"False"
	I0914 17:37:43.701014  299001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.908991718s)
	I0914 17:37:43.701260  299001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.905019416s)
	I0914 17:37:43.701297  299001 addons.go:475] Verifying addon ingress=true in "addons-478069"
	I0914 17:37:43.701622  299001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.901793688s)
	I0914 17:37:43.701775  299001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.869415993s)
	I0914 17:37:43.702155  299001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.382078747s)
	I0914 17:37:43.702511  299001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.287735352s)
	I0914 17:37:43.702581  299001 addons.go:475] Verifying addon metrics-server=true in "addons-478069"
	I0914 17:37:43.702682  299001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.270739134s)
	I0914 17:37:43.702726  299001 addons.go:475] Verifying addon registry=true in "addons-478069"
	I0914 17:37:43.703545  299001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.021181895s)
	W0914 17:37:43.705335  299001 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0914 17:37:43.705370  299001 retry.go:31] will retry after 193.124491ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0914 17:37:43.703632  299001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.794024839s)
	I0914 17:37:43.703723  299001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.307437935s)
	I0914 17:37:43.706608  299001 out.go:177] * Verifying ingress addon...
	I0914 17:37:43.709318  299001 out.go:177] * Verifying registry addon...
	I0914 17:37:43.710916  299001 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-478069 service yakd-dashboard -n yakd-dashboard
	
	I0914 17:37:43.712036  299001 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0914 17:37:43.715328  299001 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W0914 17:37:43.740986  299001 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0914 17:37:43.766426  299001 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0914 17:37:43.766530  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:43.767363  299001 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0914 17:37:43.767427  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:43.898898  299001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0914 17:37:43.906736  299001 pod_ready.go:103] pod "coredns-7c65d6cfc9-vcslc" in "kube-system" namespace has status "Ready":"False"
	I0914 17:37:44.218780  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:44.220045  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:44.542223  299001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.40245532s)
	I0914 17:37:44.542379  299001 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-478069"
	I0914 17:37:44.542328  299001 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.169912309s)
	I0914 17:37:44.544386  299001 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0914 17:37:44.544498  299001 out.go:177] * Verifying csi-hostpath-driver addon...
	I0914 17:37:44.546424  299001 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0914 17:37:44.547370  299001 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0914 17:37:44.548686  299001 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0914 17:37:44.548737  299001 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0914 17:37:44.557737  299001 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0914 17:37:44.557817  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:44.591739  299001 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0914 17:37:44.591813  299001 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0914 17:37:44.662999  299001 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 17:37:44.663070  299001 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0914 17:37:44.714985  299001 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0914 17:37:44.730995  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:44.731329  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:45.075750  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:45.220025  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:45.222067  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:45.552925  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:45.680596  299001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.7815613s)
	I0914 17:37:45.719358  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:45.719786  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:45.893871  299001 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.17878592s)
	I0914 17:37:45.897268  299001 addons.go:475] Verifying addon gcp-auth=true in "addons-478069"
	I0914 17:37:45.899529  299001 out.go:177] * Verifying gcp-auth addon...
	I0914 17:37:45.902703  299001 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0914 17:37:45.907676  299001 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0914 17:37:46.052509  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:46.218243  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:46.220342  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:46.349837  299001 pod_ready.go:103] pod "coredns-7c65d6cfc9-vcslc" in "kube-system" namespace has status "Ready":"False"
	I0914 17:37:46.553522  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:46.718494  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:46.721931  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:47.053532  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:47.219983  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:47.222371  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:47.553846  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:47.720684  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:47.723812  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:48.053677  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:48.223264  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:48.223928  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:48.357638  299001 pod_ready.go:103] pod "coredns-7c65d6cfc9-vcslc" in "kube-system" namespace has status "Ready":"False"
	I0914 17:37:48.553423  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:48.718267  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:48.719327  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:49.054465  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:49.226910  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:49.228038  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:49.350216  299001 pod_ready.go:93] pod "coredns-7c65d6cfc9-vcslc" in "kube-system" namespace has status "Ready":"True"
	I0914 17:37:49.350288  299001 pod_ready.go:82] duration metric: took 12.007494807s for pod "coredns-7c65d6cfc9-vcslc" in "kube-system" namespace to be "Ready" ...
	I0914 17:37:49.350315  299001 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-478069" in "kube-system" namespace to be "Ready" ...
	I0914 17:37:49.356436  299001 pod_ready.go:93] pod "etcd-addons-478069" in "kube-system" namespace has status "Ready":"True"
	I0914 17:37:49.356510  299001 pod_ready.go:82] duration metric: took 6.173145ms for pod "etcd-addons-478069" in "kube-system" namespace to be "Ready" ...
	I0914 17:37:49.356539  299001 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-478069" in "kube-system" namespace to be "Ready" ...
	I0914 17:37:49.362283  299001 pod_ready.go:93] pod "kube-apiserver-addons-478069" in "kube-system" namespace has status "Ready":"True"
	I0914 17:37:49.362355  299001 pod_ready.go:82] duration metric: took 5.794413ms for pod "kube-apiserver-addons-478069" in "kube-system" namespace to be "Ready" ...
	I0914 17:37:49.362382  299001 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-478069" in "kube-system" namespace to be "Ready" ...
	I0914 17:37:49.371509  299001 pod_ready.go:93] pod "kube-controller-manager-addons-478069" in "kube-system" namespace has status "Ready":"True"
	I0914 17:37:49.371584  299001 pod_ready.go:82] duration metric: took 9.179302ms for pod "kube-controller-manager-addons-478069" in "kube-system" namespace to be "Ready" ...
	I0914 17:37:49.371633  299001 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rnn2j" in "kube-system" namespace to be "Ready" ...
	I0914 17:37:49.378675  299001 pod_ready.go:93] pod "kube-proxy-rnn2j" in "kube-system" namespace has status "Ready":"True"
	I0914 17:37:49.378748  299001 pod_ready.go:82] duration metric: took 7.079711ms for pod "kube-proxy-rnn2j" in "kube-system" namespace to be "Ready" ...
	I0914 17:37:49.378788  299001 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-478069" in "kube-system" namespace to be "Ready" ...
	I0914 17:37:49.552774  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:49.717840  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:49.719786  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:49.747164  299001 pod_ready.go:93] pod "kube-scheduler-addons-478069" in "kube-system" namespace has status "Ready":"True"
	I0914 17:37:49.747241  299001 pod_ready.go:82] duration metric: took 368.425694ms for pod "kube-scheduler-addons-478069" in "kube-system" namespace to be "Ready" ...
	I0914 17:37:49.747271  299001 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-5mrzf" in "kube-system" namespace to be "Ready" ...
	I0914 17:37:50.053563  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:50.218512  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:50.225070  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:50.553059  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:50.718027  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:50.721193  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:51.053635  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:51.217779  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:51.219775  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:51.553149  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:51.717308  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:51.719444  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:51.754928  299001 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-5mrzf" in "kube-system" namespace has status "Ready":"False"
	I0914 17:37:52.053295  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:52.221134  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:52.224114  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:52.553703  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:52.718065  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:52.719691  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:53.058435  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:53.220594  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:53.223664  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:53.553153  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:53.718976  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:53.722347  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:54.053694  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:54.220454  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:54.254652  299001 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-5mrzf" in "kube-system" namespace has status "Ready":"False"
	I0914 17:37:54.318261  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:54.552569  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:54.717805  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:54.719156  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:55.053261  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:55.218382  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:55.220954  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:55.552341  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:55.720634  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:55.722316  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:56.053903  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:56.219393  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:56.221374  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:56.553282  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:56.717740  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:56.720572  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:56.753912  299001 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-5mrzf" in "kube-system" namespace has status "Ready":"False"
	I0914 17:37:57.053321  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:57.220864  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:57.221865  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:57.553308  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:57.719996  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:57.721765  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:58.056111  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:58.219734  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:58.222480  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:58.552654  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:58.720503  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:58.720589  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:59.053302  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:59.218426  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:59.220122  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:37:59.254074  299001 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-5mrzf" in "kube-system" namespace has status "Ready":"False"
	I0914 17:37:59.554833  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:37:59.717745  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:37:59.719771  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:00.098609  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:00.239905  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:00.247063  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:00.554768  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:00.717762  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:00.721223  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:01.052697  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:01.218813  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:01.221549  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:01.552804  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:01.720008  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:01.720908  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:01.754686  299001 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-5mrzf" in "kube-system" namespace has status "Ready":"True"
	I0914 17:38:01.754714  299001 pod_ready.go:82] duration metric: took 12.007420705s for pod "nvidia-device-plugin-daemonset-5mrzf" in "kube-system" namespace to be "Ready" ...
	I0914 17:38:01.754725  299001 pod_ready.go:39] duration metric: took 24.980870118s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 17:38:01.754787  299001 api_server.go:52] waiting for apiserver process to appear ...
	I0914 17:38:01.754869  299001 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:38:01.770265  299001 api_server.go:72] duration metric: took 28.434055187s to wait for apiserver process to appear ...
	I0914 17:38:01.770292  299001 api_server.go:88] waiting for apiserver healthz status ...
	I0914 17:38:01.770314  299001 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0914 17:38:01.778918  299001 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0914 17:38:01.780212  299001 api_server.go:141] control plane version: v1.31.1
	I0914 17:38:01.780242  299001 api_server.go:131] duration metric: took 9.942041ms to wait for apiserver health ...
	I0914 17:38:01.780251  299001 system_pods.go:43] waiting for kube-system pods to appear ...
	I0914 17:38:01.790092  299001 system_pods.go:59] 18 kube-system pods found
	I0914 17:38:01.790132  299001 system_pods.go:61] "coredns-7c65d6cfc9-vcslc" [2baa2871-6788-4717-812d-0c1fe0be866a] Running
	I0914 17:38:01.790141  299001 system_pods.go:61] "csi-hostpath-attacher-0" [dc864211-9ffb-404c-8b69-4c76566485be] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 17:38:01.790150  299001 system_pods.go:61] "csi-hostpath-resizer-0" [e6b0318b-9c61-4199-9166-fff674ef3cbf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 17:38:01.790158  299001 system_pods.go:61] "csi-hostpathplugin-6qjvq" [ddb25972-ab1e-437f-9a24-9dc8edf3a506] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 17:38:01.790163  299001 system_pods.go:61] "etcd-addons-478069" [edd85234-b50b-4947-99dc-8dcc14691779] Running
	I0914 17:38:01.790169  299001 system_pods.go:61] "kindnet-skpwn" [98eae4bb-bea8-4536-acf8-a055405f11e8] Running
	I0914 17:38:01.790173  299001 system_pods.go:61] "kube-apiserver-addons-478069" [af7a34c3-046c-42a4-9f91-373eb901b1f6] Running
	I0914 17:38:01.790177  299001 system_pods.go:61] "kube-controller-manager-addons-478069" [e4ec3e8b-e98e-4d0c-84ed-ebf7b538f10c] Running
	I0914 17:38:01.790182  299001 system_pods.go:61] "kube-ingress-dns-minikube" [1022c423-9ec8-4add-9aa7-2c52c8b4fa9b] Running
	I0914 17:38:01.790186  299001 system_pods.go:61] "kube-proxy-rnn2j" [ef35e57a-6f40-420c-99e3-15c83c814207] Running
	I0914 17:38:01.790191  299001 system_pods.go:61] "kube-scheduler-addons-478069" [ac0a1e3d-ac31-4d68-8588-6505fd024f36] Running
	I0914 17:38:01.790197  299001 system_pods.go:61] "metrics-server-84c5f94fbc-hzswq" [17e40b4b-651d-463b-b551-a297450fe05f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 17:38:01.790203  299001 system_pods.go:61] "nvidia-device-plugin-daemonset-5mrzf" [07826017-ddb7-43a6-9266-8cc86a3a9114] Running
	I0914 17:38:01.790215  299001 system_pods.go:61] "registry-66c9cd494c-z88fn" [5f368de6-9d7a-4369-b81a-e84fc032aa5e] Running
	I0914 17:38:01.790221  299001 system_pods.go:61] "registry-proxy-c8cfs" [81b66c2e-ca87-4896-b408-0013c9df1d76] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0914 17:38:01.790227  299001 system_pods.go:61] "snapshot-controller-56fcc65765-4sfpd" [cdfe5e32-0a46-41c9-a77f-1de2fcb0ed44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 17:38:01.790237  299001 system_pods.go:61] "snapshot-controller-56fcc65765-c6vkl" [e99f44b6-f02c-4d1b-8e46-ff1296f1f3c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 17:38:01.790241  299001 system_pods.go:61] "storage-provisioner" [17dc3e36-c0c4-4a5b-a16c-11163a77284d] Running
	I0914 17:38:01.790248  299001 system_pods.go:74] duration metric: took 9.990173ms to wait for pod list to return data ...
	I0914 17:38:01.790260  299001 default_sa.go:34] waiting for default service account to be created ...
	I0914 17:38:01.792965  299001 default_sa.go:45] found service account: "default"
	I0914 17:38:01.792997  299001 default_sa.go:55] duration metric: took 2.724788ms for default service account to be created ...
	I0914 17:38:01.793008  299001 system_pods.go:116] waiting for k8s-apps to be running ...
	I0914 17:38:01.804813  299001 system_pods.go:86] 18 kube-system pods found
	I0914 17:38:01.804860  299001 system_pods.go:89] "coredns-7c65d6cfc9-vcslc" [2baa2871-6788-4717-812d-0c1fe0be866a] Running
	I0914 17:38:01.804874  299001 system_pods.go:89] "csi-hostpath-attacher-0" [dc864211-9ffb-404c-8b69-4c76566485be] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0914 17:38:01.804889  299001 system_pods.go:89] "csi-hostpath-resizer-0" [e6b0318b-9c61-4199-9166-fff674ef3cbf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0914 17:38:01.804903  299001 system_pods.go:89] "csi-hostpathplugin-6qjvq" [ddb25972-ab1e-437f-9a24-9dc8edf3a506] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0914 17:38:01.804916  299001 system_pods.go:89] "etcd-addons-478069" [edd85234-b50b-4947-99dc-8dcc14691779] Running
	I0914 17:38:01.804980  299001 system_pods.go:89] "kindnet-skpwn" [98eae4bb-bea8-4536-acf8-a055405f11e8] Running
	I0914 17:38:01.804987  299001 system_pods.go:89] "kube-apiserver-addons-478069" [af7a34c3-046c-42a4-9f91-373eb901b1f6] Running
	I0914 17:38:01.804995  299001 system_pods.go:89] "kube-controller-manager-addons-478069" [e4ec3e8b-e98e-4d0c-84ed-ebf7b538f10c] Running
	I0914 17:38:01.805001  299001 system_pods.go:89] "kube-ingress-dns-minikube" [1022c423-9ec8-4add-9aa7-2c52c8b4fa9b] Running
	I0914 17:38:01.805013  299001 system_pods.go:89] "kube-proxy-rnn2j" [ef35e57a-6f40-420c-99e3-15c83c814207] Running
	I0914 17:38:01.805022  299001 system_pods.go:89] "kube-scheduler-addons-478069" [ac0a1e3d-ac31-4d68-8588-6505fd024f36] Running
	I0914 17:38:01.805030  299001 system_pods.go:89] "metrics-server-84c5f94fbc-hzswq" [17e40b4b-651d-463b-b551-a297450fe05f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0914 17:38:01.805040  299001 system_pods.go:89] "nvidia-device-plugin-daemonset-5mrzf" [07826017-ddb7-43a6-9266-8cc86a3a9114] Running
	I0914 17:38:01.805050  299001 system_pods.go:89] "registry-66c9cd494c-z88fn" [5f368de6-9d7a-4369-b81a-e84fc032aa5e] Running
	I0914 17:38:01.805067  299001 system_pods.go:89] "registry-proxy-c8cfs" [81b66c2e-ca87-4896-b408-0013c9df1d76] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0914 17:38:01.805143  299001 system_pods.go:89] "snapshot-controller-56fcc65765-4sfpd" [cdfe5e32-0a46-41c9-a77f-1de2fcb0ed44] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 17:38:01.805169  299001 system_pods.go:89] "snapshot-controller-56fcc65765-c6vkl" [e99f44b6-f02c-4d1b-8e46-ff1296f1f3c1] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0914 17:38:01.805178  299001 system_pods.go:89] "storage-provisioner" [17dc3e36-c0c4-4a5b-a16c-11163a77284d] Running
	I0914 17:38:01.805187  299001 system_pods.go:126] duration metric: took 12.171905ms to wait for k8s-apps to be running ...
	I0914 17:38:01.805199  299001 system_svc.go:44] waiting for kubelet service to be running ....
	I0914 17:38:01.805270  299001 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:38:01.825930  299001 system_svc.go:56] duration metric: took 20.719446ms WaitForService to wait for kubelet
	I0914 17:38:01.825976  299001 kubeadm.go:582] duration metric: took 28.489770201s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 17:38:01.825999  299001 node_conditions.go:102] verifying NodePressure condition ...
	I0914 17:38:01.829548  299001 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0914 17:38:01.829588  299001 node_conditions.go:123] node cpu capacity is 2
	I0914 17:38:01.829602  299001 node_conditions.go:105] duration metric: took 3.597156ms to run NodePressure ...
	I0914 17:38:01.829615  299001 start.go:241] waiting for startup goroutines ...
	I0914 17:38:02.053196  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:02.220615  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:02.221518  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:02.569279  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:02.717631  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:02.719738  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:03.053720  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:03.217947  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:03.219733  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:03.552452  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:03.719426  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:03.719690  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:04.053472  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:04.218286  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:04.221441  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:04.554178  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:04.718244  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:04.719790  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:05.051888  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:05.217202  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:05.219022  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:05.552623  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:05.722054  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:05.722740  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:06.052745  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:06.220180  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:06.221714  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:06.553216  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:06.720107  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:06.720804  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:07.053827  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:07.219641  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:07.221629  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:07.552723  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:07.722928  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:07.725073  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:08.054127  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:08.218636  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:08.220619  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:08.555237  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:08.724852  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:08.725465  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:09.053703  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:09.220218  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:09.220925  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:09.572389  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:09.718585  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:09.721882  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:10.052852  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:10.219272  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:10.220395  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:10.553193  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:10.720766  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:10.721744  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:11.053052  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:11.218166  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:11.220250  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:11.552851  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:11.717112  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:11.720018  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:12.052655  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:12.218099  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:12.219818  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:12.555748  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:12.718282  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:12.720584  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:13.054332  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:13.217332  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:13.219562  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:13.551843  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:13.718880  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:13.722649  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:14.053859  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:14.221378  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:14.222392  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:14.552405  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:14.720333  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:14.720674  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:15.055862  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:15.217381  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:15.219575  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0914 17:38:15.553000  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:15.719362  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:15.721120  299001 kapi.go:107] duration metric: took 32.005807975s to wait for kubernetes.io/minikube-addons=registry ...
	I0914 17:38:16.053307  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:16.218999  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:16.612593  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:16.718028  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:17.053348  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:17.218320  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:17.551823  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:17.718585  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:18.052998  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:18.217847  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:18.554243  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:18.721703  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:19.053106  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:19.218082  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:19.567580  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:19.721433  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:20.054010  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:20.217385  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:20.552836  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:20.717924  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:21.053425  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:21.216969  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:21.553233  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:21.720343  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:22.056651  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:22.218094  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:22.552495  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:22.717605  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:23.051989  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:23.217634  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:23.555893  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:23.717259  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:24.052577  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:24.217680  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:24.554787  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:24.718171  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:25.056838  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:25.218448  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:25.554939  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:25.719162  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:26.052142  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:26.218702  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:26.552066  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:26.717175  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:27.053591  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:27.220051  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:27.553538  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:27.717439  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:28.053041  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:28.218071  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:28.556485  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:28.718055  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:29.052793  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:29.218602  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:29.553011  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:29.718726  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:30.102799  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:30.219209  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:30.553168  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:30.719609  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:31.053952  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:31.217896  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:31.552750  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:31.717925  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:32.053609  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:32.222018  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:32.552619  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:32.717965  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:33.053654  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:33.218309  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:33.552454  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:33.717719  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:34.055097  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:34.217060  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:34.552593  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:34.717953  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:35.052298  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:35.217877  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:35.552261  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0914 17:38:35.717919  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:36.052657  299001 kapi.go:107] duration metric: took 51.505280479s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0914 17:38:36.217120  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:36.717633  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:37.218229  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:37.717288  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:38.217436  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:38.717601  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:39.217438  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:39.717219  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:40.217677  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:40.717973  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:41.217127  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:41.717245  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:42.219237  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:42.717902  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:43.217834  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:43.717329  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:44.225133  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:44.718121  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:45.222119  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:45.718097  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:46.217871  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:46.717554  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:47.218687  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:47.718048  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:48.218825  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:48.717153  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:49.218300  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:49.718177  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:50.219733  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:50.718012  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:51.221644  299001 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0914 17:38:51.743427  299001 kapi.go:107] duration metric: took 1m8.031385959s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0914 17:39:08.907159  299001 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0914 17:39:08.907189  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:09.407153  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:09.907167  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:10.406846  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:10.906915  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:11.406227  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:11.907899  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:12.406978  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:12.906623  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:13.406037  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:13.907017  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:14.406764  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:14.906499  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:15.407112  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:15.910144  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:16.407116  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:16.907215  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:17.407382  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:17.905975  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:18.407312  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:18.906364  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:19.405699  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:19.906594  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:20.407500  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:20.906558  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:21.406018  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:21.906732  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:22.406793  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:22.907246  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:23.406032  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:23.906132  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:24.407431  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:24.906541  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:25.406707  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:25.907026  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:26.407438  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:26.906715  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:27.406501  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:27.906337  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:28.406290  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:28.906024  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:29.406387  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:29.906402  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:30.406484  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:30.906876  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:31.406770  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:31.907143  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:32.406881  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:32.906779  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:33.406552  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:33.906616  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:34.406493  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:34.906661  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:35.406629  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:35.906776  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:36.406873  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:36.906642  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:37.406543  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:37.907064  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:38.407232  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:38.907332  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:39.406459  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:39.906335  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:40.406121  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:40.907332  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:41.406507  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:41.906404  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:42.406358  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:42.906369  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:43.407928  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:43.907278  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:44.409311  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:44.906529  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:45.406823  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:45.910425  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:46.406330  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:46.906566  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:47.406602  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:47.906187  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:48.407156  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:48.907195  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:49.406616  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:49.907230  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:50.406054  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:50.906105  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:51.406498  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:51.906545  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:52.406283  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:52.905969  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:53.406472  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:53.906633  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:54.407193  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:54.907416  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:55.406552  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:55.907210  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:56.407137  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:56.906922  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:57.406334  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:57.906039  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:58.406334  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:58.906188  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:59.406295  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:39:59.906243  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:00.437683  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:00.907347  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:01.406435  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:01.906262  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:02.406298  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:02.905988  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:03.406732  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:03.906758  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:04.406591  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:04.906668  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:05.407032  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:05.910287  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:06.407232  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:06.906361  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:07.406789  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:07.906306  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:08.406438  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:08.906205  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:09.406919  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:09.906763  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:10.406708  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:10.906806  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:11.406508  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:11.907778  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:12.406358  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:12.906060  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:13.408204  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:13.906247  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:14.406374  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:14.907338  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:15.405926  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:15.906797  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:16.406860  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:16.906843  299001 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0914 17:40:17.406987  299001 kapi.go:107] duration metric: took 2m31.504280752s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0914 17:40:17.408736  299001 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-478069 cluster.
	I0914 17:40:17.410465  299001 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0914 17:40:17.412255  299001 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0914 17:40:17.414310  299001 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, storage-provisioner, volcano, nvidia-device-plugin, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0914 17:40:17.416024  299001 addons.go:510] duration metric: took 2m44.079524301s for enable addons: enabled=[cloud-spanner ingress-dns storage-provisioner volcano nvidia-device-plugin metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0914 17:40:17.416136  299001 start.go:246] waiting for cluster config update ...
	I0914 17:40:17.416166  299001 start.go:255] writing updated cluster config ...
	I0914 17:40:17.416488  299001 ssh_runner.go:195] Run: rm -f paused
	I0914 17:40:17.756768  299001 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0914 17:40:17.758800  299001 out.go:177] * Done! kubectl is now configured to use "addons-478069" cluster and "default" namespace by default
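
As the gcp-auth messages above indicate, individual pods can opt out of having GCP credentials mounted by carrying the `gcp-auth-skip-secret` label. A minimal pod manifest sketch illustrating this (only the label key comes from the message above; the pod name, image, and label value are illustrative assumptions):

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds-demo          # hypothetical name, for illustration only
      labels:
        gcp-auth-skip-secret: "true"   # label key from the addon message; value shown is assumed
    spec:
      containers:
      - name: app
        image: nginx                   # any container image works for this sketch

A pod like this could be applied with a plain `kubectl apply -f pod.yaml`; the gcp-auth webhook would then leave it without the mounted credentials.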
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	3cf32c0e8215a       4f725bf50aaa5       2 minutes ago       Exited              gadget                                   5                   3b183ba736adf       gadget-5nf8v
	41f8e1ce80fef       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   1c86ee7cff57c       gcp-auth-89d5ffd79-lb4jk
	ef87c30cd92b2       8b46b1cd48760       4 minutes ago       Running             admission                                0                   b984a40e03486       volcano-admission-77d7d48b68-fcqnn
	59fcbd56249a6       289a818c8d9c5       4 minutes ago       Running             controller                               0                   7e4d671ffc242       ingress-nginx-controller-bc57996ff-wb82g
	393e555036fa3       ee6d597e62dc8       5 minutes ago       Running             csi-snapshotter                          0                   bb8d008916a38       csi-hostpathplugin-6qjvq
	e8e91d64c0a45       642ded511e141       5 minutes ago       Running             csi-provisioner                          0                   bb8d008916a38       csi-hostpathplugin-6qjvq
	ec54ff50fc544       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   bb8d008916a38       csi-hostpathplugin-6qjvq
	0fb654178d646       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   bb8d008916a38       csi-hostpathplugin-6qjvq
	2d39241c27d3d       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   bb8d008916a38       csi-hostpathplugin-6qjvq
	4d1c2cf2396ac       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   2705d6e7f5968       csi-hostpath-attacher-0
	8a8ad43317a97       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   0845abbd6de47       csi-hostpath-resizer-0
	7cb93fb966035       420193b27261a       5 minutes ago       Exited              patch                                    0                   4f16a51663d7f       ingress-nginx-admission-patch-nknq9
	d63571f6489a6       d9c7ad4c226bf       5 minutes ago       Running             volcano-scheduler                        0                   2b0b0ebe6f6da       volcano-scheduler-576bc46687-4mvft
	4a5ac8ee1add9       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   bb8d008916a38       csi-hostpathplugin-6qjvq
	90df0b9d46174       420193b27261a       5 minutes ago       Exited              create                                   0                   f1f048860ad45       ingress-nginx-admission-create-2c9fn
	553418e13c2b5       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   8db410f8c30f1       volcano-controllers-56675bb4d5-jkmfm
	efe8f8ba91f7b       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   d65383de5ba01       registry-proxy-c8cfs
	7fc75fca7e813       77bdba588b953       5 minutes ago       Running             yakd                                     0                   3263484e63df0       yakd-dashboard-67d98fc6b-547j8
	af3b74f307be4       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   189fe7b3992af       snapshot-controller-56fcc65765-4sfpd
	966abc687c814       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   bb9aa91150276       metrics-server-84c5f94fbc-hzswq
	0a6ee98dd8a8d       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   3d23d84ae289d       snapshot-controller-56fcc65765-c6vkl
	192e6a6258dcd       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   04507bf2b45d4       local-path-provisioner-86d989889c-hz7bq
	fcd3ca8f0f4b8       c9cf76bb104e1       5 minutes ago       Running             registry                                 0                   795e76c50a7f2       registry-66c9cd494c-z88fn
	0c7f58447765d       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   58debe67da51d       nvidia-device-plugin-daemonset-5mrzf
	3ce6d9c288751       8be4bcf8ec607       5 minutes ago       Running             cloud-spanner-emulator                   0                   c460eccce05f9       cloud-spanner-emulator-769b77f747-rgckg
	dbfbc7bc7744f       2f6c962e7b831       5 minutes ago       Running             coredns                                  0                   4db7bf9c989ed       coredns-7c65d6cfc9-vcslc
	8ae5227d209cf       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   7400a7fdab637       kube-ingress-dns-minikube
	45fbc1d61fd10       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   357720f9443e8       storage-provisioner
	2bb23ab99048f       6a23fa8fd2b78       6 minutes ago       Running             kindnet-cni                              0                   daddf749fb331       kindnet-skpwn
	3a997b13f73a3       24a140c548c07       6 minutes ago       Running             kube-proxy                               0                   dc5c01a23e2c4       kube-proxy-rnn2j
	778b641a5b658       7f8aa378bb47d       6 minutes ago       Running             kube-scheduler                           0                   055b11fa5de6d       kube-scheduler-addons-478069
	2e1dc8ae45121       279f381cb3736       6 minutes ago       Running             kube-controller-manager                  0                   3d8396438193b       kube-controller-manager-addons-478069
	27bb9541e4b31       d3f53a98c0a9d       6 minutes ago       Running             kube-apiserver                           0                   9c663aaae736e       kube-apiserver-addons-478069
	857921eb90a12       27e3830e14027       6 minutes ago       Running             etcd                                     0                   7444fc2e1c53c       etcd-addons-478069
	
	
	==> containerd <==
	Sep 14 17:40:28 addons-478069 containerd[820]: time="2024-09-14T17:40:28.039013606Z" level=info msg="RemovePodSandbox \"c91bc5bc7f34b6106aff74ba667f8f36c0a61e06463f1d50ab0a7986e3e22d45\" returns successfully"
	Sep 14 17:41:06 addons-478069 containerd[820]: time="2024-09-14T17:41:06.930003077Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\""
	Sep 14 17:41:07 addons-478069 containerd[820]: time="2024-09-14T17:41:07.057467296Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 14 17:41:07 addons-478069 containerd[820]: time="2024-09-14T17:41:07.058995900Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: active requests=0, bytes read=89"
	Sep 14 17:41:07 addons-478069 containerd[820]: time="2024-09-14T17:41:07.062553367Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" with image id \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\", size \"72524105\" in 132.494142ms"
	Sep 14 17:41:07 addons-478069 containerd[820]: time="2024-09-14T17:41:07.062627755Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" returns image reference \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\""
	Sep 14 17:41:07 addons-478069 containerd[820]: time="2024-09-14T17:41:07.065015426Z" level=info msg="CreateContainer within sandbox \"3b183ba736adfc849e8bab9ca1238e2ea995742daddadedcc891230b93b0f105\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Sep 14 17:41:07 addons-478069 containerd[820]: time="2024-09-14T17:41:07.081597175Z" level=info msg="CreateContainer within sandbox \"3b183ba736adfc849e8bab9ca1238e2ea995742daddadedcc891230b93b0f105\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"3cf32c0e8215aac267a517b1b91cd1e439d37d4e4fe188f710722004d8a7d947\""
	Sep 14 17:41:07 addons-478069 containerd[820]: time="2024-09-14T17:41:07.082712061Z" level=info msg="StartContainer for \"3cf32c0e8215aac267a517b1b91cd1e439d37d4e4fe188f710722004d8a7d947\""
	Sep 14 17:41:07 addons-478069 containerd[820]: time="2024-09-14T17:41:07.136333024Z" level=info msg="StartContainer for \"3cf32c0e8215aac267a517b1b91cd1e439d37d4e4fe188f710722004d8a7d947\" returns successfully"
	Sep 14 17:41:08 addons-478069 containerd[820]: time="2024-09-14T17:41:08.737052081Z" level=info msg="shim disconnected" id=3cf32c0e8215aac267a517b1b91cd1e439d37d4e4fe188f710722004d8a7d947 namespace=k8s.io
	Sep 14 17:41:08 addons-478069 containerd[820]: time="2024-09-14T17:41:08.737116745Z" level=warning msg="cleaning up after shim disconnected" id=3cf32c0e8215aac267a517b1b91cd1e439d37d4e4fe188f710722004d8a7d947 namespace=k8s.io
	Sep 14 17:41:08 addons-478069 containerd[820]: time="2024-09-14T17:41:08.737127584Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 14 17:41:09 addons-478069 containerd[820]: time="2024-09-14T17:41:09.098651711Z" level=info msg="RemoveContainer for \"35e83b817e20222acf1123d48ea92a4f014dedae6f7d14015498ed158c9a9af1\""
	Sep 14 17:41:09 addons-478069 containerd[820]: time="2024-09-14T17:41:09.106277487Z" level=info msg="RemoveContainer for \"35e83b817e20222acf1123d48ea92a4f014dedae6f7d14015498ed158c9a9af1\" returns successfully"
	Sep 14 17:41:28 addons-478069 containerd[820]: time="2024-09-14T17:41:28.043010707Z" level=info msg="RemoveContainer for \"d43d900122c87be12a2911858547f3747d0b9d4a29a27f0cff155590967e2c01\""
	Sep 14 17:41:28 addons-478069 containerd[820]: time="2024-09-14T17:41:28.049396435Z" level=info msg="RemoveContainer for \"d43d900122c87be12a2911858547f3747d0b9d4a29a27f0cff155590967e2c01\" returns successfully"
	Sep 14 17:41:28 addons-478069 containerd[820]: time="2024-09-14T17:41:28.051475677Z" level=info msg="StopPodSandbox for \"19c1dd37e03999d41dcf006a7161412930eb50634be2fceb4ce8c7b55bc0a754\""
	Sep 14 17:41:28 addons-478069 containerd[820]: time="2024-09-14T17:41:28.059260394Z" level=info msg="TearDown network for sandbox \"19c1dd37e03999d41dcf006a7161412930eb50634be2fceb4ce8c7b55bc0a754\" successfully"
	Sep 14 17:41:28 addons-478069 containerd[820]: time="2024-09-14T17:41:28.059300960Z" level=info msg="StopPodSandbox for \"19c1dd37e03999d41dcf006a7161412930eb50634be2fceb4ce8c7b55bc0a754\" returns successfully"
	Sep 14 17:41:28 addons-478069 containerd[820]: time="2024-09-14T17:41:28.059915129Z" level=info msg="RemovePodSandbox for \"19c1dd37e03999d41dcf006a7161412930eb50634be2fceb4ce8c7b55bc0a754\""
	Sep 14 17:41:28 addons-478069 containerd[820]: time="2024-09-14T17:41:28.059963900Z" level=info msg="Forcibly stopping sandbox \"19c1dd37e03999d41dcf006a7161412930eb50634be2fceb4ce8c7b55bc0a754\""
	Sep 14 17:41:28 addons-478069 containerd[820]: time="2024-09-14T17:41:28.067456991Z" level=info msg="TearDown network for sandbox \"19c1dd37e03999d41dcf006a7161412930eb50634be2fceb4ce8c7b55bc0a754\" successfully"
	Sep 14 17:41:28 addons-478069 containerd[820]: time="2024-09-14T17:41:28.073998378Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"19c1dd37e03999d41dcf006a7161412930eb50634be2fceb4ce8c7b55bc0a754\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 14 17:41:28 addons-478069 containerd[820]: time="2024-09-14T17:41:28.074106702Z" level=info msg="RemovePodSandbox \"19c1dd37e03999d41dcf006a7161412930eb50634be2fceb4ce8c7b55bc0a754\" returns successfully"
	
	
	==> coredns [dbfbc7bc7744f555eb1a2ea93ba73a2d22bea0a8eb822d05efaa4800368da6ca] <==
	[INFO] 10.244.0.11:37956 - 36503 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000160049s
	[INFO] 10.244.0.11:54643 - 4512 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003146162s
	[INFO] 10.244.0.11:54643 - 39844 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002920004s
	[INFO] 10.244.0.11:40974 - 48868 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000170781s
	[INFO] 10.244.0.11:40974 - 12769 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00017339s
	[INFO] 10.244.0.11:57076 - 64895 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000130035s
	[INFO] 10.244.0.11:57076 - 4216 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000933611s
	[INFO] 10.244.0.11:53596 - 38219 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000122412s
	[INFO] 10.244.0.11:53596 - 19783 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000115561s
	[INFO] 10.244.0.11:46907 - 45261 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000075512s
	[INFO] 10.244.0.11:46907 - 33230 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000074322s
	[INFO] 10.244.0.11:46108 - 12874 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001536941s
	[INFO] 10.244.0.11:46108 - 28489 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001914861s
	[INFO] 10.244.0.11:51004 - 61420 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000133899s
	[INFO] 10.244.0.11:51004 - 27114 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000216927s
	[INFO] 10.244.0.24:37783 - 4122 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00017417s
	[INFO] 10.244.0.24:40136 - 17091 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000175893s
	[INFO] 10.244.0.24:53867 - 11465 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000139241s
	[INFO] 10.244.0.24:55152 - 61531 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00009207s
	[INFO] 10.244.0.24:40894 - 44520 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000302211s
	[INFO] 10.244.0.24:59614 - 62019 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000333489s
	[INFO] 10.244.0.24:44089 - 15611 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003823338s
	[INFO] 10.244.0.24:35827 - 63465 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.004028514s
	[INFO] 10.244.0.24:47010 - 57334 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002018679s
	[INFO] 10.244.0.24:57726 - 57265 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001659131s
	
	
	==> describe nodes <==
	Name:               addons-478069
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-478069
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=addons-478069
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T17_37_28_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-478069
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-478069"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 17:37:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-478069
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 17:43:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 17:40:31 +0000   Sat, 14 Sep 2024 17:37:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 17:40:31 +0000   Sat, 14 Sep 2024 17:37:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 17:40:31 +0000   Sat, 14 Sep 2024 17:37:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 17:40:31 +0000   Sat, 14 Sep 2024 17:37:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-478069
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 d83bf3c096694a728a1258855dc89bb2
	  System UUID:                2a358f1a-2c47-4613-9d0f-a6af6be2fae7
	  Boot ID:                    35fd0b1a-e7ce-4152-9f40-0c82d6bd6d43
	  Kernel Version:             5.15.0-1069-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-rgckg     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  gadget                      gadget-5nf8v                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  gcp-auth                    gcp-auth-89d5ffd79-lb4jk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-wb82g    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m55s
	  kube-system                 coredns-7c65d6cfc9-vcslc                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m3s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpathplugin-6qjvq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 etcd-addons-478069                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m8s
	  kube-system                 kindnet-skpwn                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m3s
	  kube-system                 kube-apiserver-addons-478069                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-controller-manager-addons-478069       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 kube-proxy-rnn2j                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-scheduler-addons-478069                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 metrics-server-84c5f94fbc-hzswq             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m58s
	  kube-system                 nvidia-device-plugin-daemonset-5mrzf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 registry-66c9cd494c-z88fn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 registry-proxy-c8cfs                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 snapshot-controller-56fcc65765-4sfpd        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 snapshot-controller-56fcc65765-c6vkl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  local-path-storage          local-path-provisioner-86d989889c-hz7bq     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  volcano-system              volcano-admission-77d7d48b68-fcqnn          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  volcano-system              volcano-controllers-56675bb4d5-jkmfm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  volcano-system              volcano-scheduler-576bc46687-4mvft          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-547j8              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m1s                   kube-proxy       
	  Normal   NodeHasSufficientMemory  6m15s (x8 over 6m15s)  kubelet          Node addons-478069 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m15s (x7 over 6m15s)  kubelet          Node addons-478069 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m15s (x7 over 6m15s)  kubelet          Node addons-478069 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m15s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 6m9s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m9s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m8s                   kubelet          Node addons-478069 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m8s                   kubelet          Node addons-478069 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m8s                   kubelet          Node addons-478069 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m4s                   node-controller  Node addons-478069 event: Registered Node addons-478069 in Controller
	
	
	==> dmesg <==
	[Sep14 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014639] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.450141] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.835490] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.751208] kauditd_printk_skb: 36 callbacks suppressed
	[Sep14 16:29] hrtimer: interrupt took 40626028 ns
	[Sep14 17:07] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [857921eb90a126898e7cfb94ea3f9b4af89a60c0e640081a20c16cba07e35e33] <==
	{"level":"info","ts":"2024-09-14T17:37:21.821741Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-14T17:37:21.821957Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-14T17:37:21.821979Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-14T17:37:21.822081Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-14T17:37:21.822093Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-14T17:37:22.794984Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-14T17:37:22.795209Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-14T17:37:22.795335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-14T17:37:22.795422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-14T17:37:22.795498Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-14T17:37:22.795564Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-14T17:37:22.795653Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-14T17:37:22.798830Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T17:37:22.800268Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-478069 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-14T17:37:22.800399Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T17:37:22.800829Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-14T17:37:22.801006Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T17:37:22.801177Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T17:37:22.801304Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-14T17:37:22.802202Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T17:37:22.802331Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-14T17:37:22.802491Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-14T17:37:22.803197Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-14T17:37:22.804206Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-14T17:37:22.803207Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> gcp-auth [41f8e1ce80fef3ad1b7287ca52d4c33dd2de24a2c40f563c4522c69e6bc0a6da] <==
	2024/09/14 17:40:16 GCP Auth Webhook started!
	2024/09/14 17:40:34 Ready to marshal response ...
	2024/09/14 17:40:34 Ready to write response ...
	2024/09/14 17:40:35 Ready to marshal response ...
	2024/09/14 17:40:35 Ready to write response ...
	
	
	==> kernel <==
	 17:43:36 up  1:26,  0 users,  load average: 0.09, 1.23, 2.17
	Linux addons-478069 5.15.0-1069-aws #75~20.04.1-Ubuntu SMP Mon Aug 19 16:22:47 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [2bb23ab99048f2e6b5df785e9959f8ecce57b01cbf773e144018d9344f2ef156] <==
	I0914 17:41:35.000625       1 main.go:299] handling current node
	I0914 17:41:45.000188       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 17:41:45.000416       1 main.go:299] handling current node
	I0914 17:41:54.999977       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 17:41:55.000032       1 main.go:299] handling current node
	I0914 17:42:05.008854       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 17:42:05.008911       1 main.go:299] handling current node
	I0914 17:42:15.001196       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 17:42:15.001288       1 main.go:299] handling current node
	I0914 17:42:25.008673       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 17:42:25.008711       1 main.go:299] handling current node
	I0914 17:42:35.000448       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 17:42:35.000483       1 main.go:299] handling current node
	I0914 17:42:45.002254       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 17:42:45.002293       1 main.go:299] handling current node
	I0914 17:42:55.001596       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 17:42:55.001804       1 main.go:299] handling current node
	I0914 17:43:05.010272       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 17:43:05.010313       1 main.go:299] handling current node
	I0914 17:43:15.007901       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 17:43:15.007936       1 main.go:299] handling current node
	I0914 17:43:25.001223       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 17:43:25.001260       1 main.go:299] handling current node
	I0914 17:43:35.000348       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0914 17:43:35.000395       1 main.go:299] handling current node
	
	
	==> kube-apiserver [27bb9541e4b31f95d14a16f4a91aca7ee72156642e969968ba2d5970ad170c0d] <==
	W0914 17:38:47.341895       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.164.12:443: connect: connection refused
	W0914 17:38:48.386598       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.164.12:443: connect: connection refused
	W0914 17:38:48.892769       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.174.199:443: connect: connection refused
	E0914 17:38:48.892806       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.174.199:443: connect: connection refused" logger="UnhandledError"
	W0914 17:38:48.894679       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.111.164.12:443: connect: connection refused
	W0914 17:38:48.934857       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.174.199:443: connect: connection refused
	E0914 17:38:48.934902       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.174.199:443: connect: connection refused" logger="UnhandledError"
	W0914 17:38:48.936624       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.111.164.12:443: connect: connection refused
	W0914 17:38:49.408532       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.164.12:443: connect: connection refused
	W0914 17:38:50.481749       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.164.12:443: connect: connection refused
	W0914 17:38:51.489004       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.164.12:443: connect: connection refused
	W0914 17:38:52.504995       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.164.12:443: connect: connection refused
	W0914 17:38:53.549025       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.164.12:443: connect: connection refused
	W0914 17:38:54.609626       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.164.12:443: connect: connection refused
	W0914 17:38:55.710286       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.164.12:443: connect: connection refused
	W0914 17:38:56.782778       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.164.12:443: connect: connection refused
	W0914 17:38:57.857593       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.111.164.12:443: connect: connection refused
	W0914 17:39:08.794881       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.174.199:443: connect: connection refused
	E0914 17:39:08.794921       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.174.199:443: connect: connection refused" logger="UnhandledError"
	W0914 17:39:48.903987       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.174.199:443: connect: connection refused
	E0914 17:39:48.904032       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.174.199:443: connect: connection refused" logger="UnhandledError"
	W0914 17:39:48.943222       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.174.199:443: connect: connection refused
	E0914 17:39:48.943264       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.174.199:443: connect: connection refused" logger="UnhandledError"
	I0914 17:40:34.303675       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0914 17:40:34.342491       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	
	
	==> kube-controller-manager [2e1dc8ae4512180f61723221f2c3843fcfb71e39cea40d158e9b94362c3306a0] <==
	I0914 17:39:48.931443       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0914 17:39:48.931696       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0914 17:39:48.945241       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0914 17:39:48.953602       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0914 17:39:48.966671       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0914 17:39:48.969225       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0914 17:39:48.977352       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0914 17:39:49.842560       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0914 17:39:49.859428       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0914 17:39:50.995304       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0914 17:39:51.023223       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0914 17:39:52.002199       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0914 17:39:52.014120       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0914 17:39:52.022013       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0914 17:39:52.028916       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0914 17:39:52.038185       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0914 17:39:52.043997       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0914 17:40:16.947370       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="9.699332ms"
	I0914 17:40:16.947736       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="158.334µs"
	I0914 17:40:22.028007       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0914 17:40:22.028352       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0914 17:40:22.082754       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0914 17:40:22.083245       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0914 17:40:31.934547       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-478069"
	I0914 17:40:34.012868       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	
	
	==> kube-proxy [3a997b13f73a3be677719c476c38ec35ad87bdeaa08f7e5db3671d4b49227c21] <==
	I0914 17:37:34.527722       1 server_linux.go:66] "Using iptables proxy"
	I0914 17:37:34.651518       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0914 17:37:34.651579       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0914 17:37:34.708753       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0914 17:37:34.708849       1 server_linux.go:169] "Using iptables Proxier"
	I0914 17:37:34.715873       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0914 17:37:34.716392       1 server.go:483] "Version info" version="v1.31.1"
	I0914 17:37:34.716406       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0914 17:37:34.720949       1 config.go:199] "Starting service config controller"
	I0914 17:37:34.720983       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0914 17:37:34.721006       1 config.go:105] "Starting endpoint slice config controller"
	I0914 17:37:34.721010       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0914 17:37:34.721732       1 config.go:328] "Starting node config controller"
	I0914 17:37:34.721744       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0914 17:37:34.821884       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0914 17:37:34.821916       1 shared_informer.go:320] Caches are synced for service config
	I0914 17:37:34.821887       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [778b641a5b6588f4fc2831d0197ea9b08cf606f101e37c02f36065e7b67f599e] <==
	W0914 17:37:25.721018       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 17:37:25.722322       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0914 17:37:25.721100       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 17:37:25.722417       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 17:37:25.721180       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0914 17:37:25.722588       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 17:37:25.721212       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 17:37:25.722803       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 17:37:25.721261       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 17:37:25.722958       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 17:37:25.721293       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 17:37:25.723107       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 17:37:26.561510       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0914 17:37:26.561773       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 17:37:26.585121       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 17:37:26.585169       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0914 17:37:26.602951       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 17:37:26.602996       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0914 17:37:26.620788       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 17:37:26.620842       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0914 17:37:26.648466       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 17:37:26.648686       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0914 17:37:26.738561       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 17:37:26.738818       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0914 17:37:28.806420       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 14 17:41:35 addons-478069 kubelet[1496]: I0914 17:41:35.928248    1496 scope.go:117] "RemoveContainer" containerID="3cf32c0e8215aac267a517b1b91cd1e439d37d4e4fe188f710722004d8a7d947"
	Sep 14 17:41:35 addons-478069 kubelet[1496]: E0914 17:41:35.928445    1496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-5nf8v_gadget(2d1cefd4-5000-4feb-b972-b1fa0c546aa3)\"" pod="gadget/gadget-5nf8v" podUID="2d1cefd4-5000-4feb-b972-b1fa0c546aa3"
	Sep 14 17:41:49 addons-478069 kubelet[1496]: I0914 17:41:49.928485    1496 scope.go:117] "RemoveContainer" containerID="3cf32c0e8215aac267a517b1b91cd1e439d37d4e4fe188f710722004d8a7d947"
	Sep 14 17:41:49 addons-478069 kubelet[1496]: E0914 17:41:49.928683    1496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-5nf8v_gadget(2d1cefd4-5000-4feb-b972-b1fa0c546aa3)\"" pod="gadget/gadget-5nf8v" podUID="2d1cefd4-5000-4feb-b972-b1fa0c546aa3"
	Sep 14 17:41:54 addons-478069 kubelet[1496]: I0914 17:41:54.928656    1496 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-c8cfs" secret="" err="secret \"gcp-auth\" not found"
	Sep 14 17:42:04 addons-478069 kubelet[1496]: I0914 17:42:04.928619    1496 scope.go:117] "RemoveContainer" containerID="3cf32c0e8215aac267a517b1b91cd1e439d37d4e4fe188f710722004d8a7d947"
	Sep 14 17:42:04 addons-478069 kubelet[1496]: E0914 17:42:04.928864    1496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-5nf8v_gadget(2d1cefd4-5000-4feb-b972-b1fa0c546aa3)\"" pod="gadget/gadget-5nf8v" podUID="2d1cefd4-5000-4feb-b972-b1fa0c546aa3"
	Sep 14 17:42:05 addons-478069 kubelet[1496]: I0914 17:42:05.928023    1496 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-5mrzf" secret="" err="secret \"gcp-auth\" not found"
	Sep 14 17:42:17 addons-478069 kubelet[1496]: I0914 17:42:17.929858    1496 scope.go:117] "RemoveContainer" containerID="3cf32c0e8215aac267a517b1b91cd1e439d37d4e4fe188f710722004d8a7d947"
	Sep 14 17:42:17 addons-478069 kubelet[1496]: E0914 17:42:17.930540    1496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-5nf8v_gadget(2d1cefd4-5000-4feb-b972-b1fa0c546aa3)\"" pod="gadget/gadget-5nf8v" podUID="2d1cefd4-5000-4feb-b972-b1fa0c546aa3"
	Sep 14 17:42:31 addons-478069 kubelet[1496]: I0914 17:42:31.929076    1496 scope.go:117] "RemoveContainer" containerID="3cf32c0e8215aac267a517b1b91cd1e439d37d4e4fe188f710722004d8a7d947"
	Sep 14 17:42:31 addons-478069 kubelet[1496]: E0914 17:42:31.929826    1496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-5nf8v_gadget(2d1cefd4-5000-4feb-b972-b1fa0c546aa3)\"" pod="gadget/gadget-5nf8v" podUID="2d1cefd4-5000-4feb-b972-b1fa0c546aa3"
	Sep 14 17:42:42 addons-478069 kubelet[1496]: I0914 17:42:42.928186    1496 scope.go:117] "RemoveContainer" containerID="3cf32c0e8215aac267a517b1b91cd1e439d37d4e4fe188f710722004d8a7d947"
	Sep 14 17:42:42 addons-478069 kubelet[1496]: E0914 17:42:42.928400    1496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-5nf8v_gadget(2d1cefd4-5000-4feb-b972-b1fa0c546aa3)\"" pod="gadget/gadget-5nf8v" podUID="2d1cefd4-5000-4feb-b972-b1fa0c546aa3"
	Sep 14 17:42:55 addons-478069 kubelet[1496]: I0914 17:42:55.927780    1496 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-z88fn" secret="" err="secret \"gcp-auth\" not found"
	Sep 14 17:42:56 addons-478069 kubelet[1496]: I0914 17:42:56.928089    1496 scope.go:117] "RemoveContainer" containerID="3cf32c0e8215aac267a517b1b91cd1e439d37d4e4fe188f710722004d8a7d947"
	Sep 14 17:42:56 addons-478069 kubelet[1496]: E0914 17:42:56.928304    1496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-5nf8v_gadget(2d1cefd4-5000-4feb-b972-b1fa0c546aa3)\"" pod="gadget/gadget-5nf8v" podUID="2d1cefd4-5000-4feb-b972-b1fa0c546aa3"
	Sep 14 17:42:57 addons-478069 kubelet[1496]: I0914 17:42:57.929289    1496 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-c8cfs" secret="" err="secret \"gcp-auth\" not found"
	Sep 14 17:43:09 addons-478069 kubelet[1496]: I0914 17:43:09.928013    1496 scope.go:117] "RemoveContainer" containerID="3cf32c0e8215aac267a517b1b91cd1e439d37d4e4fe188f710722004d8a7d947"
	Sep 14 17:43:09 addons-478069 kubelet[1496]: E0914 17:43:09.928838    1496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-5nf8v_gadget(2d1cefd4-5000-4feb-b972-b1fa0c546aa3)\"" pod="gadget/gadget-5nf8v" podUID="2d1cefd4-5000-4feb-b972-b1fa0c546aa3"
	Sep 14 17:43:24 addons-478069 kubelet[1496]: I0914 17:43:24.928216    1496 scope.go:117] "RemoveContainer" containerID="3cf32c0e8215aac267a517b1b91cd1e439d37d4e4fe188f710722004d8a7d947"
	Sep 14 17:43:24 addons-478069 kubelet[1496]: E0914 17:43:24.928506    1496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-5nf8v_gadget(2d1cefd4-5000-4feb-b972-b1fa0c546aa3)\"" pod="gadget/gadget-5nf8v" podUID="2d1cefd4-5000-4feb-b972-b1fa0c546aa3"
	Sep 14 17:43:32 addons-478069 kubelet[1496]: I0914 17:43:32.928217    1496 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-5mrzf" secret="" err="secret \"gcp-auth\" not found"
	Sep 14 17:43:36 addons-478069 kubelet[1496]: I0914 17:43:36.927733    1496 scope.go:117] "RemoveContainer" containerID="3cf32c0e8215aac267a517b1b91cd1e439d37d4e4fe188f710722004d8a7d947"
	Sep 14 17:43:36 addons-478069 kubelet[1496]: E0914 17:43:36.927934    1496 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-5nf8v_gadget(2d1cefd4-5000-4feb-b972-b1fa0c546aa3)\"" pod="gadget/gadget-5nf8v" podUID="2d1cefd4-5000-4feb-b972-b1fa0c546aa3"
	
	
	==> storage-provisioner [45fbc1d61fd1033e734883a10ae76c090d58ede49d17d4872c0d6cbef2a8e6df] <==
	I0914 17:37:39.124832       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 17:37:39.137092       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 17:37:39.137139       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 17:37:39.147035       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 17:37:39.147286       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-478069_64524dcf-84e7-4844-a6dc-13fdad5e0ecb!
	I0914 17:37:39.148590       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"602a84e3-bcf9-4833-8849-123b7018645b", APIVersion:"v1", ResourceVersion:"571", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-478069_64524dcf-84e7-4844-a6dc-13fdad5e0ecb became leader
	I0914 17:37:39.247697       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-478069_64524dcf-84e7-4844-a6dc-13fdad5e0ecb!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-478069 -n addons-478069
helpers_test.go:261: (dbg) Run:  kubectl --context addons-478069 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-2c9fn ingress-nginx-admission-patch-nknq9 test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-478069 describe pod ingress-nginx-admission-create-2c9fn ingress-nginx-admission-patch-nknq9 test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-478069 describe pod ingress-nginx-admission-create-2c9fn ingress-nginx-admission-patch-nknq9 test-job-nginx-0: exit status 1 (91.229795ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-2c9fn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-nknq9" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-478069 describe pod ingress-nginx-admission-create-2c9fn ingress-nginx-admission-patch-nknq9 test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (199.97s)

TestStartStop/group/old-k8s-version/serial/SecondStart (376.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-947842 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0914 18:27:10.954553  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p old-k8s-version-947842 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: exit status 80 (6m12.374717888s)

-- stdout --
	* [old-k8s-version-947842] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19643-292860/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-292860/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	* Using the docker driver based on existing profile
	* Starting "old-k8s-version-947842" primary control-plane node in "old-k8s-version-947842" cluster
	* Pulling base image v0.0.45-1726281268-19643 ...
	* Restarting existing docker container for "old-k8s-version-947842" ...
	* Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-947842 addons enable metrics-server
	
	* Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	
	

-- /stdout --
** stderr ** 
	I0914 18:26:42.324077  502178 out.go:345] Setting OutFile to fd 1 ...
	I0914 18:26:42.324311  502178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:26:42.324340  502178 out.go:358] Setting ErrFile to fd 2...
	I0914 18:26:42.324359  502178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:26:42.324702  502178 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-292860/.minikube/bin
	I0914 18:26:42.325195  502178 out.go:352] Setting JSON to false
	I0914 18:26:42.326269  502178 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7754,"bootTime":1726330648,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0914 18:26:42.326391  502178 start.go:139] virtualization:  
	I0914 18:26:42.329271  502178 out.go:177] * [old-k8s-version-947842] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0914 18:26:42.331293  502178 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 18:26:42.331373  502178 notify.go:220] Checking for updates...
	I0914 18:26:42.336761  502178 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 18:26:42.338884  502178 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-292860/kubeconfig
	I0914 18:26:42.340778  502178 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-292860/.minikube
	I0914 18:26:42.342897  502178 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 18:26:42.345556  502178 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 18:26:42.347981  502178 config.go:182] Loaded profile config "old-k8s-version-947842": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0914 18:26:42.350764  502178 out.go:177] * Kubernetes 1.31.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.1
	I0914 18:26:42.352495  502178 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 18:26:42.398076  502178 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 18:26:42.398210  502178 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 18:26:42.491907  502178 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-14 18:26:42.480352354 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 18:26:42.492015  502178 docker.go:318] overlay module found
	I0914 18:26:42.495986  502178 out.go:177] * Using the docker driver based on existing profile
	I0914 18:26:42.497772  502178 start.go:297] selected driver: docker
	I0914 18:26:42.497790  502178 start.go:901] validating driver "docker" against &{Name:old-k8s-version-947842 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947842 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:26:42.497921  502178 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 18:26:42.498497  502178 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 18:26:42.623130  502178 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2024-09-14 18:26:42.610473698 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 18:26:42.623539  502178 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:26:42.623574  502178 cni.go:84] Creating CNI manager for ""
	I0914 18:26:42.623652  502178 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 18:26:42.623705  502178 start.go:340] cluster config:
	{Name:old-k8s-version-947842 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:26:42.626210  502178 out.go:177] * Starting "old-k8s-version-947842" primary control-plane node in "old-k8s-version-947842" cluster
	I0914 18:26:42.627904  502178 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0914 18:26:42.633045  502178 out.go:177] * Pulling base image v0.0.45-1726281268-19643 ...
	I0914 18:26:42.634918  502178 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0914 18:26:42.634989  502178 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19643-292860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0914 18:26:42.635011  502178 cache.go:56] Caching tarball of preloaded images
	I0914 18:26:42.635104  502178 preload.go:172] Found /home/jenkins/minikube-integration/19643-292860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 18:26:42.635119  502178 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0914 18:26:42.635239  502178 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/old-k8s-version-947842/config.json ...
	I0914 18:26:42.635455  502178 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e in local docker daemon
	W0914 18:26:42.658634  502178 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e is of wrong architecture
	I0914 18:26:42.658652  502178 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e to local cache
	I0914 18:26:42.658724  502178 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e in local cache directory
	I0914 18:26:42.658749  502178 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e in local cache directory, skipping pull
	I0914 18:26:42.658754  502178 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e exists in cache, skipping pull
	I0914 18:26:42.658761  502178 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e as a tarball
	I0914 18:26:42.658766  502178 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e from local cache
	I0914 18:26:42.789088  502178 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e from cached tarball
	I0914 18:26:42.789127  502178 cache.go:194] Successfully downloaded all kic artifacts
	I0914 18:26:42.789172  502178 start.go:360] acquireMachinesLock for old-k8s-version-947842: {Name:mke5e5e405782aa58eba63071293807e0672b5c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:26:42.789248  502178 start.go:364] duration metric: took 45.038µs to acquireMachinesLock for "old-k8s-version-947842"
	I0914 18:26:42.789271  502178 start.go:96] Skipping create...Using existing machine configuration
	I0914 18:26:42.789279  502178 fix.go:54] fixHost starting: 
	I0914 18:26:42.789553  502178 cli_runner.go:164] Run: docker container inspect old-k8s-version-947842 --format={{.State.Status}}
	I0914 18:26:42.817917  502178 fix.go:112] recreateIfNeeded on old-k8s-version-947842: state=Stopped err=<nil>
	W0914 18:26:42.817950  502178 fix.go:138] unexpected machine state, will restart: <nil>
	I0914 18:26:42.820630  502178 out.go:177] * Restarting existing docker container for "old-k8s-version-947842" ...
	I0914 18:26:42.822446  502178 cli_runner.go:164] Run: docker start old-k8s-version-947842
	I0914 18:26:43.181870  502178 cli_runner.go:164] Run: docker container inspect old-k8s-version-947842 --format={{.State.Status}}
	I0914 18:26:43.200510  502178 kic.go:430] container "old-k8s-version-947842" state is running.
	I0914 18:26:43.201339  502178 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-947842
	I0914 18:26:43.262733  502178 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/old-k8s-version-947842/config.json ...
	I0914 18:26:43.262954  502178 machine.go:93] provisionDockerMachine start ...
	I0914 18:26:43.263013  502178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-947842
	I0914 18:26:43.316934  502178 main.go:141] libmachine: Using SSH client type: native
	I0914 18:26:43.317213  502178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I0914 18:26:43.317223  502178 main.go:141] libmachine: About to run SSH command:
	hostname
	I0914 18:26:43.318311  502178 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54214->127.0.0.1:33433: read: connection reset by peer
	I0914 18:26:46.466954  502178 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-947842
	
	I0914 18:26:46.467035  502178 ubuntu.go:169] provisioning hostname "old-k8s-version-947842"
	I0914 18:26:46.467139  502178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-947842
	I0914 18:26:46.497488  502178 main.go:141] libmachine: Using SSH client type: native
	I0914 18:26:46.497766  502178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I0914 18:26:46.497780  502178 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-947842 && echo "old-k8s-version-947842" | sudo tee /etc/hostname
	I0914 18:26:46.688001  502178 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-947842
	
	I0914 18:26:46.688096  502178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-947842
	I0914 18:26:46.717855  502178 main.go:141] libmachine: Using SSH client type: native
	I0914 18:26:46.718118  502178 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I0914 18:26:46.718143  502178 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-947842' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-947842/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-947842' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0914 18:26:46.876926  502178 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0914 18:26:46.876950  502178 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19643-292860/.minikube CaCertPath:/home/jenkins/minikube-integration/19643-292860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19643-292860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19643-292860/.minikube}
	I0914 18:26:46.876979  502178 ubuntu.go:177] setting up certificates
	I0914 18:26:46.876990  502178 provision.go:84] configureAuth start
	I0914 18:26:46.877048  502178 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-947842
	I0914 18:26:46.914367  502178 provision.go:143] copyHostCerts
	I0914 18:26:46.914433  502178 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-292860/.minikube/cert.pem, removing ...
	I0914 18:26:46.914445  502178 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-292860/.minikube/cert.pem
	I0914 18:26:46.914520  502178 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-292860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19643-292860/.minikube/cert.pem (1123 bytes)
	I0914 18:26:46.914625  502178 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-292860/.minikube/key.pem, removing ...
	I0914 18:26:46.914631  502178 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-292860/.minikube/key.pem
	I0914 18:26:46.914658  502178 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-292860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19643-292860/.minikube/key.pem (1675 bytes)
	I0914 18:26:46.914712  502178 exec_runner.go:144] found /home/jenkins/minikube-integration/19643-292860/.minikube/ca.pem, removing ...
	I0914 18:26:46.914717  502178 exec_runner.go:203] rm: /home/jenkins/minikube-integration/19643-292860/.minikube/ca.pem
	I0914 18:26:46.914739  502178 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19643-292860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19643-292860/.minikube/ca.pem (1082 bytes)
	I0914 18:26:46.914784  502178 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19643-292860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19643-292860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19643-292860/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-947842 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-947842]
	I0914 18:26:48.240023  502178 provision.go:177] copyRemoteCerts
	I0914 18:26:48.240102  502178 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0914 18:26:48.240155  502178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-947842
	I0914 18:26:48.335859  502178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/old-k8s-version-947842/id_rsa Username:docker}
	I0914 18:26:48.466453  502178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-292860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0914 18:26:48.506349  502178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-292860/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0914 18:26:48.547424  502178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-292860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0914 18:26:48.578241  502178 provision.go:87] duration metric: took 1.701227346s to configureAuth
	I0914 18:26:48.578271  502178 ubuntu.go:193] setting minikube options for container-runtime
	I0914 18:26:48.578492  502178 config.go:182] Loaded profile config "old-k8s-version-947842": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0914 18:26:48.578506  502178 machine.go:96] duration metric: took 5.315544639s to provisionDockerMachine
	I0914 18:26:48.578522  502178 start.go:293] postStartSetup for "old-k8s-version-947842" (driver="docker")
	I0914 18:26:48.578535  502178 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0914 18:26:48.578605  502178 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0914 18:26:48.578656  502178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-947842
	I0914 18:26:48.602566  502178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/old-k8s-version-947842/id_rsa Username:docker}
	I0914 18:26:48.709840  502178 ssh_runner.go:195] Run: cat /etc/os-release
	I0914 18:26:48.714027  502178 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0914 18:26:48.714061  502178 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0914 18:26:48.714073  502178 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0914 18:26:48.714080  502178 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0914 18:26:48.714090  502178 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-292860/.minikube/addons for local assets ...
	I0914 18:26:48.714159  502178 filesync.go:126] Scanning /home/jenkins/minikube-integration/19643-292860/.minikube/files for local assets ...
	I0914 18:26:48.714235  502178 filesync.go:149] local asset: /home/jenkins/minikube-integration/19643-292860/.minikube/files/etc/ssl/certs/2982552.pem -> 2982552.pem in /etc/ssl/certs
	I0914 18:26:48.714351  502178 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0914 18:26:48.724998  502178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-292860/.minikube/files/etc/ssl/certs/2982552.pem --> /etc/ssl/certs/2982552.pem (1708 bytes)
	I0914 18:26:48.757941  502178 start.go:296] duration metric: took 179.401453ms for postStartSetup
	I0914 18:26:48.758029  502178 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 18:26:48.758072  502178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-947842
	I0914 18:26:48.778834  502178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/old-k8s-version-947842/id_rsa Username:docker}
	I0914 18:26:48.876766  502178 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0914 18:26:48.882976  502178 fix.go:56] duration metric: took 6.093687496s for fixHost
	I0914 18:26:48.883002  502178 start.go:83] releasing machines lock for "old-k8s-version-947842", held for 6.093742766s
	I0914 18:26:48.883074  502178 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-947842
	I0914 18:26:48.900947  502178 ssh_runner.go:195] Run: cat /version.json
	I0914 18:26:48.900983  502178 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0914 18:26:48.901005  502178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-947842
	I0914 18:26:48.901141  502178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-947842
	I0914 18:26:48.921997  502178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/old-k8s-version-947842/id_rsa Username:docker}
	I0914 18:26:48.939501  502178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/old-k8s-version-947842/id_rsa Username:docker}
	I0914 18:26:49.163352  502178 ssh_runner.go:195] Run: systemctl --version
	I0914 18:26:49.169975  502178 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0914 18:26:49.175661  502178 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0914 18:26:49.200093  502178 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0914 18:26:49.200280  502178 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0914 18:26:49.212862  502178 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0914 18:26:49.212954  502178 start.go:495] detecting cgroup driver to use...
	I0914 18:26:49.213027  502178 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0914 18:26:49.213130  502178 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0914 18:26:49.231044  502178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0914 18:26:49.247699  502178 docker.go:217] disabling cri-docker service (if available) ...
	I0914 18:26:49.247838  502178 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0914 18:26:49.265165  502178 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0914 18:26:49.281083  502178 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0914 18:26:49.421034  502178 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0914 18:26:49.561981  502178 docker.go:233] disabling docker service ...
	I0914 18:26:49.562124  502178 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0914 18:26:49.584431  502178 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0914 18:26:49.598053  502178 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0914 18:26:49.696217  502178 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0914 18:26:49.813925  502178 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0914 18:26:49.825889  502178 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0914 18:26:49.843699  502178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0914 18:26:49.855408  502178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0914 18:26:49.869103  502178 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0914 18:26:49.869201  502178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0914 18:26:49.881977  502178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 18:26:49.893939  502178 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0914 18:26:49.906569  502178 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0914 18:26:49.917101  502178 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0914 18:26:49.926900  502178 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0914 18:26:49.937560  502178 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0914 18:26:49.950232  502178 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0914 18:26:49.959247  502178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:26:50.080453  502178 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0914 18:26:50.298553  502178 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0914 18:26:50.298672  502178 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0914 18:26:50.302729  502178 start.go:563] Will wait 60s for crictl version
	I0914 18:26:50.302850  502178 ssh_runner.go:195] Run: which crictl
	I0914 18:26:50.307677  502178 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0914 18:26:50.361487  502178 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0914 18:26:50.361613  502178 ssh_runner.go:195] Run: containerd --version
	I0914 18:26:50.385529  502178 ssh_runner.go:195] Run: containerd --version
	I0914 18:26:50.411105  502178 out.go:177] * Preparing Kubernetes v1.20.0 on containerd 1.7.22 ...
	I0914 18:26:50.412763  502178 cli_runner.go:164] Run: docker network inspect old-k8s-version-947842 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0914 18:26:50.439550  502178 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0914 18:26:50.443323  502178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0914 18:26:50.454324  502178 kubeadm.go:883] updating cluster {Name:old-k8s-version-947842 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947842 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0914 18:26:50.454443  502178 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0914 18:26:50.454502  502178 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:26:50.498494  502178 containerd.go:627] all images are preloaded for containerd runtime.
	I0914 18:26:50.498519  502178 containerd.go:534] Images already preloaded, skipping extraction
	I0914 18:26:50.498611  502178 ssh_runner.go:195] Run: sudo crictl images --output json
	I0914 18:26:50.553550  502178 containerd.go:627] all images are preloaded for containerd runtime.
	I0914 18:26:50.553573  502178 cache_images.go:84] Images are preloaded, skipping loading
	I0914 18:26:50.553581  502178 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.20.0 containerd true true} ...
	I0914 18:26:50.553746  502178 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=old-k8s-version-947842 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0914 18:26:50.553836  502178 ssh_runner.go:195] Run: sudo crictl info
	I0914 18:26:50.601338  502178 cni.go:84] Creating CNI manager for ""
	I0914 18:26:50.601360  502178 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 18:26:50.601370  502178 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0914 18:26:50.601390  502178 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-947842 NodeName:old-k8s-version-947842 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0914 18:26:50.601511  502178 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "old-k8s-version-947842"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0914 18:26:50.601578  502178 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0914 18:26:50.610687  502178 binaries.go:44] Found k8s binaries, skipping transfer
	I0914 18:26:50.610805  502178 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0914 18:26:50.619364  502178 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (442 bytes)
	I0914 18:26:50.637373  502178 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0914 18:26:50.655099  502178 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2125 bytes)
	I0914 18:26:50.673526  502178 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0914 18:26:50.677444  502178 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
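
The bash one-liner above rewrites /etc/hosts in one shot: grep -v strips any stale line for control-plane.minikube.internal, the fresh mapping is appended, and the result is written to a temp file before being copied back with sudo. A minimal Go sketch of the same idea (illustrative only; the real step runs the bash pipeline over SSH):

// hosts.go - sketch of the /etc/hosts rewrite done by the bash one-liner above.
package main

import (
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		fields := strings.Fields(line)
		// Drop any stale mapping for the control-plane alias (the grep -v part).
		if len(fields) >= 2 && fields[len(fields)-1] == host {
			continue
		}
		kept = append(kept, line)
	}
	// Append the fresh mapping (the echo part).
	kept = append(kept, "192.168.76.2\t"+host)
	// The log writes a temp file and copies it back with sudo cp; this sketch
	// writes directly, which likewise needs root.
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
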
	I0914 18:26:50.687996  502178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:26:50.795553  502178 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:26:50.814431  502178 certs.go:68] Setting up /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/old-k8s-version-947842 for IP: 192.168.76.2
	I0914 18:26:50.814492  502178 certs.go:194] generating shared ca certs ...
	I0914 18:26:50.814533  502178 certs.go:226] acquiring lock for ca certs: {Name:mkf21090b38f44552475e7c85ae32e95553c36bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:26:50.814719  502178 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/19643-292860/.minikube/ca.key
	I0914 18:26:50.814766  502178 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19643-292860/.minikube/proxy-client-ca.key
	I0914 18:26:50.814774  502178 certs.go:256] generating profile certs ...
	I0914 18:26:50.814860  502178 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/old-k8s-version-947842/client.key
	I0914 18:26:50.814922  502178 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/old-k8s-version-947842/apiserver.key.030cf229
	I0914 18:26:50.814958  502178 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/old-k8s-version-947842/proxy-client.key
	I0914 18:26:50.815061  502178 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-292860/.minikube/certs/298255.pem (1338 bytes)
	W0914 18:26:50.815088  502178 certs.go:480] ignoring /home/jenkins/minikube-integration/19643-292860/.minikube/certs/298255_empty.pem, impossibly tiny 0 bytes
	I0914 18:26:50.815096  502178 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-292860/.minikube/certs/ca-key.pem (1679 bytes)
	I0914 18:26:50.815119  502178 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-292860/.minikube/certs/ca.pem (1082 bytes)
	I0914 18:26:50.815141  502178 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-292860/.minikube/certs/cert.pem (1123 bytes)
	I0914 18:26:50.815162  502178 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-292860/.minikube/certs/key.pem (1675 bytes)
	I0914 18:26:50.815201  502178 certs.go:484] found cert: /home/jenkins/minikube-integration/19643-292860/.minikube/files/etc/ssl/certs/2982552.pem (1708 bytes)
	I0914 18:26:50.815874  502178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-292860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0914 18:26:50.878873  502178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-292860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0914 18:26:50.934688  502178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-292860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0914 18:26:50.989868  502178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-292860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0914 18:26:51.038283  502178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/old-k8s-version-947842/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0914 18:26:51.074073  502178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/old-k8s-version-947842/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0914 18:26:51.107342  502178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/old-k8s-version-947842/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0914 18:26:51.143695  502178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/old-k8s-version-947842/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0914 18:26:51.178499  502178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-292860/.minikube/certs/298255.pem --> /usr/share/ca-certificates/298255.pem (1338 bytes)
	I0914 18:26:51.208767  502178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-292860/.minikube/files/etc/ssl/certs/2982552.pem --> /usr/share/ca-certificates/2982552.pem (1708 bytes)
	I0914 18:26:51.248738  502178 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19643-292860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0914 18:26:51.276724  502178 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0914 18:26:51.296378  502178 ssh_runner.go:195] Run: openssl version
	I0914 18:26:51.302167  502178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0914 18:26:51.312606  502178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:26:51.316582  502178 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 14 17:37 /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:26:51.316710  502178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0914 18:26:51.323959  502178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0914 18:26:51.333836  502178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/298255.pem && ln -fs /usr/share/ca-certificates/298255.pem /etc/ssl/certs/298255.pem"
	I0914 18:26:51.344088  502178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/298255.pem
	I0914 18:26:51.347997  502178 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 14 17:47 /usr/share/ca-certificates/298255.pem
	I0914 18:26:51.348120  502178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/298255.pem
	I0914 18:26:51.355496  502178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/298255.pem /etc/ssl/certs/51391683.0"
	I0914 18:26:51.365646  502178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2982552.pem && ln -fs /usr/share/ca-certificates/2982552.pem /etc/ssl/certs/2982552.pem"
	I0914 18:26:51.375835  502178 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2982552.pem
	I0914 18:26:51.379805  502178 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 14 17:47 /usr/share/ca-certificates/2982552.pem
	I0914 18:26:51.379935  502178 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2982552.pem
	I0914 18:26:51.387122  502178 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2982552.pem /etc/ssl/certs/3ec20f2e.0"
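
The three cert-install sequences above follow OpenSSL's hashed-symlink convention: "openssl x509 -hash -noout" prints the subject hash, and a symlink named <hash>.0 under /etc/ssl/certs (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run) is how OpenSSL-based clients locate the CA. A small Go sketch of the same step, shelling out to openssl as the log does (paths are illustrative):

// linkca.go - sketch: install a CA cert the way the steps above do.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path
	// openssl x509 -hash -noout prints the subject hash used to name the symlink.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash) // e.g. b5213941.0 above
	_ = os.Remove(link)                              // replace any stale link
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}
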
	I0914 18:26:51.396957  502178 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0914 18:26:51.400965  502178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0914 18:26:51.408207  502178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0914 18:26:51.415422  502178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0914 18:26:51.422747  502178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0914 18:26:51.429970  502178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0914 18:26:51.437780  502178 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
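
Each "-checkend 86400" run above asks openssl whether the certificate expires within the next 24 hours; a non-zero exit would force regeneration. A minimal Go equivalent using crypto/x509 (the path is one of the certs checked above, picked as an example):

// checkend.go - sketch of openssl's "-checkend 86400" check in Go.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM data found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of -checkend 86400: does the cert expire within 24h?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}
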
	I0914 18:26:51.444917  502178 kubeadm.go:392] StartCluster: {Name:old-k8s-version-947842 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-947842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:26:51.445075  502178 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0914 18:26:51.445164  502178 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0914 18:26:51.512678  502178 cri.go:89] found id: "c87aaf8daeeeea40058b74b7d5b95ec822b76e8baa48a3c6471522e852c3c6f1"
	I0914 18:26:51.512761  502178 cri.go:89] found id: "1559da34b47646301b138b6609891e4e385b98be7dfb19ec4a58e286b569f3bb"
	I0914 18:26:51.512782  502178 cri.go:89] found id: "dd62f1eaa6ef5422268dfe04da8bbde63587b9d988c946a7820e3a08fe250b00"
	I0914 18:26:51.512802  502178 cri.go:89] found id: "72df987200414e54ca3d144bf10d64ca95c7922359fc2e2386694ec8173c1e67"
	I0914 18:26:51.512832  502178 cri.go:89] found id: "d37fe18d4149efe0c0585d15624e1b685b673fe1e1318b8d8576a613caea0cd6"
	I0914 18:26:51.512852  502178 cri.go:89] found id: "a95b8ab1729a1aa8c7a7c40de867414e37edf9cdc76797052a9cca425d7bca7f"
	I0914 18:26:51.512871  502178 cri.go:89] found id: "a29bb42affc63d1ac4d075c5a582547362849f78991e81dccb9e9383a9acc1e6"
	I0914 18:26:51.512889  502178 cri.go:89] found id: "a0ffc6994576d8808d2bc3654f8e6857aae39c23082101d31933371d294b3a54"
	I0914 18:26:51.512918  502178 cri.go:89] found id: ""
	I0914 18:26:51.513015  502178 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I0914 18:26:51.526597  502178 cri.go:116] JSON = null
	W0914 18:26:51.526711  502178 kubeadm.go:399] unpause failed: list paused: list returned 0 containers, but ps returned 8
	I0914 18:26:51.526798  502178 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0914 18:26:51.536651  502178 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0914 18:26:51.536719  502178 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0914 18:26:51.536815  502178 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0914 18:26:51.545442  502178 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0914 18:26:51.546032  502178 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-947842" does not appear in /home/jenkins/minikube-integration/19643-292860/kubeconfig
	I0914 18:26:51.546217  502178 kubeconfig.go:62] /home/jenkins/minikube-integration/19643-292860/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-947842" cluster setting kubeconfig missing "old-k8s-version-947842" context setting]
	I0914 18:26:51.546587  502178 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-292860/kubeconfig: {Name:mke326c789f0dca4467afe86488dc47fc7003eaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:26:51.548193  502178 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0914 18:26:51.557690  502178 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.76.2
	I0914 18:26:51.557723  502178 kubeadm.go:597] duration metric: took 20.983931ms to restartPrimaryControlPlane
	I0914 18:26:51.557734  502178 kubeadm.go:394] duration metric: took 112.825656ms to StartCluster
	I0914 18:26:51.557750  502178 settings.go:142] acquiring lock: {Name:mk211baf85a5d12c53e1bc3687f6aa07604e6004 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:26:51.557815  502178 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19643-292860/kubeconfig
	I0914 18:26:51.558404  502178 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-292860/kubeconfig: {Name:mke326c789f0dca4467afe86488dc47fc7003eaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 18:26:51.558602  502178 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0914 18:26:51.558873  502178 config.go:182] Loaded profile config "old-k8s-version-947842": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0914 18:26:51.558919  502178 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0914 18:26:51.558996  502178 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-947842"
	I0914 18:26:51.559016  502178 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-947842"
	W0914 18:26:51.559027  502178 addons.go:243] addon storage-provisioner should already be in state true
	I0914 18:26:51.559052  502178 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-947842"
	I0914 18:26:51.559068  502178 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-947842"
	I0914 18:26:51.559399  502178 cli_runner.go:164] Run: docker container inspect old-k8s-version-947842 --format={{.State.Status}}
	I0914 18:26:51.559576  502178 host.go:66] Checking if "old-k8s-version-947842" exists ...
	I0914 18:26:51.560041  502178 cli_runner.go:164] Run: docker container inspect old-k8s-version-947842 --format={{.State.Status}}
	I0914 18:26:51.560420  502178 addons.go:69] Setting dashboard=true in profile "old-k8s-version-947842"
	I0914 18:26:51.560441  502178 addons.go:234] Setting addon dashboard=true in "old-k8s-version-947842"
	W0914 18:26:51.560448  502178 addons.go:243] addon dashboard should already be in state true
	I0914 18:26:51.560473  502178 host.go:66] Checking if "old-k8s-version-947842" exists ...
	I0914 18:26:51.560900  502178 cli_runner.go:164] Run: docker container inspect old-k8s-version-947842 --format={{.State.Status}}
	I0914 18:26:51.561069  502178 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-947842"
	I0914 18:26:51.561085  502178 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-947842"
	W0914 18:26:51.561091  502178 addons.go:243] addon metrics-server should already be in state true
	I0914 18:26:51.561111  502178 host.go:66] Checking if "old-k8s-version-947842" exists ...
	I0914 18:26:51.561494  502178 cli_runner.go:164] Run: docker container inspect old-k8s-version-947842 --format={{.State.Status}}
	I0914 18:26:51.567919  502178 out.go:177] * Verifying Kubernetes components...
	I0914 18:26:51.570203  502178 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0914 18:26:51.615661  502178 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0914 18:26:51.617487  502178 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:26:51.617510  502178 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0914 18:26:51.617568  502178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-947842
	I0914 18:26:51.625242  502178 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-947842"
	W0914 18:26:51.625275  502178 addons.go:243] addon default-storageclass should already be in state true
	I0914 18:26:51.625307  502178 host.go:66] Checking if "old-k8s-version-947842" exists ...
	I0914 18:26:51.628505  502178 cli_runner.go:164] Run: docker container inspect old-k8s-version-947842 --format={{.State.Status}}
	I0914 18:26:51.637159  502178 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0914 18:26:51.638792  502178 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0914 18:26:51.638818  502178 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0914 18:26:51.638898  502178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-947842
	I0914 18:26:51.677363  502178 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0914 18:26:51.684796  502178 out.go:177]   - Using image registry.k8s.io/echoserver:1.4
	I0914 18:26:51.686643  502178 addons.go:431] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0914 18:26:51.686665  502178 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0914 18:26:51.686759  502178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-947842
	I0914 18:26:51.686941  502178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/old-k8s-version-947842/id_rsa Username:docker}
	I0914 18:26:51.690644  502178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/old-k8s-version-947842/id_rsa Username:docker}
	I0914 18:26:51.717527  502178 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0914 18:26:51.717548  502178 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0914 18:26:51.717609  502178 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-947842
	I0914 18:26:51.722677  502178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/old-k8s-version-947842/id_rsa Username:docker}
	I0914 18:26:51.751763  502178 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/old-k8s-version-947842/id_rsa Username:docker}
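
The repeated "docker container inspect -f" template above looks up the host port that Docker mapped to the container's SSH port 22; each of the four SSH clients then dials 127.0.0.1 on that port (33433 here). A rough Go sketch of the same lookup by shelling out to the docker CLI:

// sshport.go - sketch: find the host port mapped to the kic container's port 22.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "old-k8s-version-947842"
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
	if err != nil {
		panic(err)
	}
	port := strings.TrimSpace(string(out))
	fmt.Println("ssh to 127.0.0.1:" + port) // e.g. 33433 in the log above
}
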
	I0914 18:26:51.801518  502178 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0914 18:26:51.881243  502178 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-947842" to be "Ready" ...
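
"waiting up to 6m0s for node ... to be Ready" is a poll of the node's Ready condition against the apiserver, which is why the connection-refused errors further down keep the wait alive. A bare-bones client-go sketch of that check, assuming a reachable kubeconfig (the path is illustrative):

// nodeready.go - sketch: poll a node's Ready condition, roughly what node_ready.go waits on.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-947842", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(3 * time.Second) // retry on errors (e.g. connection refused) and on NotReady
	}
	fmt.Println("timed out waiting for node to be Ready")
}
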
	I0914 18:26:51.916599  502178 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0914 18:26:51.916623  502178 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0914 18:26:51.942038  502178 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0914 18:26:51.942065  502178 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0914 18:26:51.987326  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:26:51.990045  502178 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0914 18:26:51.990076  502178 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0914 18:26:52.023154  502178 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:26:52.023222  502178 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0914 18:26:52.033405  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0914 18:26:52.099489  502178 addons.go:431] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0914 18:26:52.099585  502178 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0914 18:26:52.129346  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:26:52.183822  502178 addons.go:431] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0914 18:26:52.183888  502178 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0914 18:26:52.265738  502178 addons.go:431] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0914 18:26:52.265801  502178 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0914 18:26:52.326959  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:52.327005  502178 retry.go:31] will retry after 329.440813ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
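
Every retry.go line in this stretch is the same pattern: the kubectl apply fails because the apiserver is not yet answering on 8443, so the step is re-run after a short, growing, jittered delay until it succeeds or the overall start times out. A generic sketch of that retry loop (not minikube's actual retry package, just the shape of it):

// retryexpo.go - sketch: re-run a step with jittered, growing delays until it succeeds or time runs out.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryExpo keeps calling fn until it returns nil or total elapsed time exceeds maxTime.
func retryExpo(fn func() error, initial, maxTime time.Duration) error {
	start := time.Now()
	delay := initial
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > maxTime {
			return fmt.Errorf("giving up after %s: %w", time.Since(start).Round(time.Millisecond), err)
		}
		// Jitter the delay, then grow it, mirroring the varying "will retry after" durations above.
		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", sleep.Round(time.Millisecond), err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	attempts := 0
	_ = retryExpo(func() error {
		attempts++
		if attempts < 4 {
			return fmt.Errorf("connection to the server localhost:8443 was refused")
		}
		return nil
	}, 200*time.Millisecond, 30*time.Second)
	fmt.Println("succeeded after", attempts, "attempts")
}
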
	I0914 18:26:52.360625  502178 addons.go:431] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0914 18:26:52.360663  502178 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0914 18:26:52.395679  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:52.395711  502178 retry.go:31] will retry after 129.203415ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:52.445181  502178 addons.go:431] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0914 18:26:52.445219  502178 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0914 18:26:52.453798  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:52.453828  502178 retry.go:31] will retry after 182.408821ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:52.471237  502178 addons.go:431] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0914 18:26:52.471262  502178 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0914 18:26:52.492817  502178 addons.go:431] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0914 18:26:52.492887  502178 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0914 18:26:52.512840  502178 addons.go:431] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0914 18:26:52.512911  502178 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0914 18:26:52.525064  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0914 18:26:52.537226  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0914 18:26:52.635636  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:52.635669  502178 retry.go:31] will retry after 342.18553ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:52.636831  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:26:52.657557  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0914 18:26:52.657766  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:52.657815  502178 retry.go:31] will retry after 286.479604ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0914 18:26:52.806741  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:52.806832  502178 retry.go:31] will retry after 422.491182ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0914 18:26:52.837459  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:52.837543  502178 retry.go:31] will retry after 506.422806ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:52.945346  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0914 18:26:52.978621  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0914 18:26:53.123368  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:53.123401  502178 retry.go:31] will retry after 418.543783ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0914 18:26:53.136121  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:53.136204  502178 retry.go:31] will retry after 530.566936ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:53.230308  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0914 18:26:53.327878  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:53.327958  502178 retry.go:31] will retry after 775.745068ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:53.344292  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0914 18:26:53.437286  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:53.437382  502178 retry.go:31] will retry after 657.947414ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:53.542558  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0914 18:26:53.638858  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:53.638938  502178 retry.go:31] will retry after 657.885023ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:53.667212  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0914 18:26:53.763147  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:53.763225  502178 retry.go:31] will retry after 978.083342ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:53.882765  502178 node_ready.go:53] error getting node "old-k8s-version-947842": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-947842": dial tcp 192.168.76.2:8443: connect: connection refused
	I0914 18:26:54.096065  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:26:54.104519  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0914 18:26:54.221285  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:54.221323  502178 retry.go:31] will retry after 785.660536ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0914 18:26:54.272913  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:54.272946  502178 retry.go:31] will retry after 505.072111ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:54.297236  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0914 18:26:54.398977  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:54.399011  502178 retry.go:31] will retry after 795.410216ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:54.741513  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0914 18:26:54.778809  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0914 18:26:54.841767  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:54.841800  502178 retry.go:31] will retry after 791.853245ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0914 18:26:54.951459  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:54.951496  502178 retry.go:31] will retry after 856.950345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:55.008241  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0914 18:26:55.112132  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:55.112168  502178 retry.go:31] will retry after 1.001232697s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:55.195470  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0914 18:26:55.296279  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:55.296314  502178 retry.go:31] will retry after 1.447511489s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:55.634345  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0914 18:26:55.731736  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:55.731767  502178 retry.go:31] will retry after 1.557395495s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:55.809043  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0914 18:26:55.919976  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:55.920067  502178 retry.go:31] will retry after 2.576487375s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:56.114321  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0914 18:26:56.222483  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:56.222564  502178 retry.go:31] will retry after 1.018204858s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:56.382036  502178 node_ready.go:53] error getting node "old-k8s-version-947842": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-947842": dial tcp 192.168.76.2:8443: connect: connection refused
	I0914 18:26:56.744368  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0914 18:26:56.841781  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:56.841866  502178 retry.go:31] will retry after 1.072618821s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:57.241567  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:26:57.290165  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0914 18:26:57.458679  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:57.458707  502178 retry.go:31] will retry after 1.618299998s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	W0914 18:26:57.624300  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:57.624334  502178 retry.go:31] will retry after 3.21595693s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:57.914701  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W0914 18:26:58.024669  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:58.024710  502178 retry.go:31] will retry after 3.803642115s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:58.497100  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0914 18:26:58.675312  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:58.675343  502178 retry.go:31] will retry after 3.776427404s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:58.882102  502178 node_ready.go:53] error getting node "old-k8s-version-947842": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-947842": dial tcp 192.168.76.2:8443: connect: connection refused
	I0914 18:26:59.077573  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0914 18:26:59.176297  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:26:59.176330  502178 retry.go:31] will retry after 4.004424369s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:27:00.840490  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0914 18:27:01.190812  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:27:01.190846  502178 retry.go:31] will retry after 4.242772804s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I0914 18:27:01.829491  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0914 18:27:02.452251  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0914 18:27:03.181201  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0914 18:27:05.433815  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0914 18:27:11.383036  502178 node_ready.go:53] error getting node "old-k8s-version-947842": Get "https://192.168.76.2:8443/api/v1/nodes/old-k8s-version-947842": net/http: TLS handshake timeout
	I0914 18:27:12.167046  502178 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (10.33731958s)
	W0914 18:27:12.167097  502178 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I0914 18:27:12.167116  502178 retry.go:31] will retry after 5.329975992s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	I0914 18:27:13.053863  502178 node_ready.go:49] node "old-k8s-version-947842" has status "Ready":"True"
	I0914 18:27:13.053888  502178 node_ready.go:38] duration metric: took 21.172567681s for node "old-k8s-version-947842" to be "Ready" ...
	I0914 18:27:13.053900  502178 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:27:13.374400  502178 pod_ready.go:79] waiting up to 6m0s for pod "coredns-74ff55c5b-nr99f" in "kube-system" namespace to be "Ready" ...
	I0914 18:27:14.980058  502178 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.798816342s)
	I0914 18:27:14.980150  502178 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (9.546313894s)
	I0914 18:27:14.980509  502178 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (12.52822285s)
	I0914 18:27:14.980569  502178 addons.go:475] Verifying addon metrics-server=true in "old-k8s-version-947842"
	I0914 18:27:15.394541  502178 pod_ready.go:103] pod "coredns-74ff55c5b-nr99f" in "kube-system" namespace has status "Ready":"False"
	I0914 18:27:17.497937  502178 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0914 18:27:17.892561  502178 pod_ready.go:103] pod "coredns-74ff55c5b-nr99f" in "kube-system" namespace has status "Ready":"False"
	I0914 18:27:18.289896  502178 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-947842 addons enable metrics-server
	
	I0914 18:27:18.291760  502178 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I0914 18:27:18.293261  502178 addons.go:510] duration metric: took 26.734339906s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I0914 18:27:20.381703  502178 pod_ready.go:103] pod "coredns-74ff55c5b-nr99f" in "kube-system" namespace has status "Ready":"False"
	I0914 18:27:22.381785  502178 pod_ready.go:103] pod "coredns-74ff55c5b-nr99f" in "kube-system" namespace has status "Ready":"False"
	I0914 18:27:24.880956  502178 pod_ready.go:103] pod "coredns-74ff55c5b-nr99f" in "kube-system" namespace has status "Ready":"False"
	I0914 18:27:26.881057  502178 pod_ready.go:103] pod "coredns-74ff55c5b-nr99f" in "kube-system" namespace has status "Ready":"False"
	I0914 18:27:28.881491  502178 pod_ready.go:103] pod "coredns-74ff55c5b-nr99f" in "kube-system" namespace has status "Ready":"False"
	I0914 18:27:31.381431  502178 pod_ready.go:103] pod "coredns-74ff55c5b-nr99f" in "kube-system" namespace has status "Ready":"False"
	I0914 18:27:33.381934  502178 pod_ready.go:103] pod "coredns-74ff55c5b-nr99f" in "kube-system" namespace has status "Ready":"False"
	I0914 18:27:35.382145  502178 pod_ready.go:103] pod "coredns-74ff55c5b-nr99f" in "kube-system" namespace has status "Ready":"False"
	I0914 18:27:37.881783  502178 pod_ready.go:103] pod "coredns-74ff55c5b-nr99f" in "kube-system" namespace has status "Ready":"False"
	I0914 18:27:39.887381  502178 pod_ready.go:103] pod "coredns-74ff55c5b-nr99f" in "kube-system" namespace has status "Ready":"False"
	I0914 18:27:42.382195  502178 pod_ready.go:103] pod "coredns-74ff55c5b-nr99f" in "kube-system" namespace has status "Ready":"False"
	I0914 18:27:44.881927  502178 pod_ready.go:103] pod "coredns-74ff55c5b-nr99f" in "kube-system" namespace has status "Ready":"False"
	I0914 18:27:46.882495  502178 pod_ready.go:103] pod "coredns-74ff55c5b-nr99f" in "kube-system" namespace has status "Ready":"False"
	I0914 18:27:49.380890  502178 pod_ready.go:103] pod "coredns-74ff55c5b-nr99f" in "kube-system" namespace has status "Ready":"False"
	I0914 18:27:51.381828  502178 pod_ready.go:103] pod "coredns-74ff55c5b-nr99f" in "kube-system" namespace has status "Ready":"False"
	I0914 18:27:53.880371  502178 pod_ready.go:103] pod "coredns-74ff55c5b-nr99f" in "kube-system" namespace has status "Ready":"False"
	I0914 18:27:55.881879  502178 pod_ready.go:103] pod "coredns-74ff55c5b-nr99f" in "kube-system" namespace has status "Ready":"False"
	I0914 18:27:57.880678  502178 pod_ready.go:93] pod "coredns-74ff55c5b-nr99f" in "kube-system" namespace has status "Ready":"True"
	I0914 18:27:57.880705  502178 pod_ready.go:82] duration metric: took 44.506214926s for pod "coredns-74ff55c5b-nr99f" in "kube-system" namespace to be "Ready" ...
	I0914 18:27:57.880717  502178 pod_ready.go:79] waiting up to 6m0s for pod "etcd-old-k8s-version-947842" in "kube-system" namespace to be "Ready" ...
	I0914 18:27:57.885745  502178 pod_ready.go:93] pod "etcd-old-k8s-version-947842" in "kube-system" namespace has status "Ready":"True"
	I0914 18:27:57.885773  502178 pod_ready.go:82] duration metric: took 5.047844ms for pod "etcd-old-k8s-version-947842" in "kube-system" namespace to be "Ready" ...
	I0914 18:27:57.885788  502178 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-947842" in "kube-system" namespace to be "Ready" ...
	I0914 18:27:57.891090  502178 pod_ready.go:93] pod "kube-apiserver-old-k8s-version-947842" in "kube-system" namespace has status "Ready":"True"
	I0914 18:27:57.891114  502178 pod_ready.go:82] duration metric: took 5.319006ms for pod "kube-apiserver-old-k8s-version-947842" in "kube-system" namespace to be "Ready" ...
	I0914 18:27:57.891125  502178 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-947842" in "kube-system" namespace to be "Ready" ...
	I0914 18:27:59.897889  502178 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-947842" in "kube-system" namespace has status "Ready":"False"
	I0914 18:28:02.397488  502178 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-947842" in "kube-system" namespace has status "Ready":"False"
	I0914 18:28:04.398238  502178 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-947842" in "kube-system" namespace has status "Ready":"False"
	I0914 18:28:06.896873  502178 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-947842" in "kube-system" namespace has status "Ready":"False"
	I0914 18:28:08.898988  502178 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-947842" in "kube-system" namespace has status "Ready":"False"
	I0914 18:28:11.398276  502178 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-947842" in "kube-system" namespace has status "Ready":"False"
	I0914 18:28:13.398781  502178 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-947842" in "kube-system" namespace has status "Ready":"False"
	I0914 18:28:15.401428  502178 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-947842" in "kube-system" namespace has status "Ready":"False"
	I0914 18:28:17.897956  502178 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-947842" in "kube-system" namespace has status "Ready":"False"
	I0914 18:28:19.898413  502178 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-947842" in "kube-system" namespace has status "Ready":"False"
	I0914 18:28:22.397226  502178 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-947842" in "kube-system" namespace has status "Ready":"False"
	I0914 18:28:24.399725  502178 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-947842" in "kube-system" namespace has status "Ready":"False"
	I0914 18:28:26.897254  502178 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-947842" in "kube-system" namespace has status "Ready":"False"
	I0914 18:28:28.898294  502178 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-947842" in "kube-system" namespace has status "Ready":"False"
	I0914 18:28:30.900727  502178 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-947842" in "kube-system" namespace has status "Ready":"False"
	I0914 18:28:33.421982  502178 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-947842" in "kube-system" namespace has status "Ready":"False"
	I0914 18:28:35.897045  502178 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-947842" in "kube-system" namespace has status "Ready":"False"
	I0914 18:28:37.897818  502178 pod_ready.go:103] pod "kube-controller-manager-old-k8s-version-947842" in "kube-system" namespace has status "Ready":"False"
	I0914 18:28:38.897776  502178 pod_ready.go:93] pod "kube-controller-manager-old-k8s-version-947842" in "kube-system" namespace has status "Ready":"True"
	I0914 18:28:38.897801  502178 pod_ready.go:82] duration metric: took 41.006668221s for pod "kube-controller-manager-old-k8s-version-947842" in "kube-system" namespace to be "Ready" ...
	I0914 18:28:38.897814  502178 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-5cjbh" in "kube-system" namespace to be "Ready" ...
	I0914 18:28:38.902849  502178 pod_ready.go:93] pod "kube-proxy-5cjbh" in "kube-system" namespace has status "Ready":"True"
	I0914 18:28:38.902876  502178 pod_ready.go:82] duration metric: took 5.054474ms for pod "kube-proxy-5cjbh" in "kube-system" namespace to be "Ready" ...
	I0914 18:28:38.902889  502178 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-947842" in "kube-system" namespace to be "Ready" ...
	I0914 18:28:40.908670  502178 pod_ready.go:103] pod "kube-scheduler-old-k8s-version-947842" in "kube-system" namespace has status "Ready":"False"
	I0914 18:28:42.909450  502178 pod_ready.go:93] pod "kube-scheduler-old-k8s-version-947842" in "kube-system" namespace has status "Ready":"True"
	I0914 18:28:42.909480  502178 pod_ready.go:82] duration metric: took 4.006561622s for pod "kube-scheduler-old-k8s-version-947842" in "kube-system" namespace to be "Ready" ...
	I0914 18:28:42.909493  502178 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace to be "Ready" ...
	I0914 18:28:44.915209  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:28:46.916770  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:28:48.916874  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:28:51.416454  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:28:53.417039  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:28:55.915560  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:28:58.415972  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:29:00.438151  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:29:02.916961  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:29:05.415254  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:29:07.416273  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:29:09.915559  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:29:11.915786  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:29:14.415818  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:29:16.916653  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:29:19.416497  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:29:21.916440  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:29:24.418539  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:29:26.916917  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:29:29.416095  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:29:31.916367  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:29:33.947223  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:29:36.416352  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:29:38.416396  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:29:40.916150  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:29:43.422095  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:29:45.915138  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:29:47.916343  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:29:50.416264  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:29:52.417781  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:29:54.915116  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:29:56.916121  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:29:59.416508  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:30:01.416800  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:30:03.915669  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:30:06.415907  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:30:08.416069  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:30:10.916885  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:30:13.415156  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:30:15.416417  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:30:17.416580  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:30:19.915585  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:30:22.415506  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:30:24.418171  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:30:26.915419  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:30:28.915521  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:30:31.416168  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:30:33.916393  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:30:36.416492  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:30:38.915637  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:30:41.415959  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:30:43.416007  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:30:45.915933  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:30:48.415549  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:30:50.415669  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:30:52.420179  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:30:54.916235  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:30:57.416438  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:30:59.915998  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:31:02.416520  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:31:04.916280  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:31:07.415729  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:31:09.415809  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:31:11.915659  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:31:13.915806  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:31:16.416061  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:31:18.917280  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:31:21.416178  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:31:23.915823  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:31:26.415423  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:31:28.416270  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:31:30.416574  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:31:32.915834  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:31:35.415880  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:31:37.915743  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:31:40.415761  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:31:42.416535  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:31:44.915817  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:31:47.416347  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:31:49.915086  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:31:51.915479  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:31:54.417648  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:31:56.916008  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:31:59.415685  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:32:01.418379  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:32:03.915846  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:32:05.915903  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:32:08.415929  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:32:10.915463  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:32:13.419346  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:32:15.446991  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:32:17.915173  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:32:19.916521  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:32:21.916658  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:32:24.415048  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:32:26.416396  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:32:28.416457  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:32:30.916169  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:32:33.417446  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:32:35.417598  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:32:37.915697  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:32:40.416354  502178 pod_ready.go:103] pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace has status "Ready":"False"
	I0914 18:32:42.909631  502178 pod_ready.go:82] duration metric: took 4m0.000120978s for pod "metrics-server-9975d5f86-2mxk9" in "kube-system" namespace to be "Ready" ...
	E0914 18:32:42.909666  502178 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I0914 18:32:42.909677  502178 pod_ready.go:39] duration metric: took 5m29.855764614s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0914 18:32:42.909695  502178 api_server.go:52] waiting for apiserver process to appear ...
	I0914 18:32:42.909734  502178 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0914 18:32:42.909804  502178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0914 18:32:42.949630  502178 cri.go:89] found id: "e27323b296260729fa2687560071d42a4514b4f137953b3d5c8216e446b15cce"
	I0914 18:32:42.949652  502178 cri.go:89] found id: "d37fe18d4149efe0c0585d15624e1b685b673fe1e1318b8d8576a613caea0cd6"
	I0914 18:32:42.949657  502178 cri.go:89] found id: ""
	I0914 18:32:42.949665  502178 logs.go:276] 2 containers: [e27323b296260729fa2687560071d42a4514b4f137953b3d5c8216e446b15cce d37fe18d4149efe0c0585d15624e1b685b673fe1e1318b8d8576a613caea0cd6]
	I0914 18:32:42.949723  502178 ssh_runner.go:195] Run: which crictl
	I0914 18:32:42.955346  502178 ssh_runner.go:195] Run: which crictl
	I0914 18:32:42.959033  502178 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0914 18:32:42.959108  502178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0914 18:32:43.014094  502178 cri.go:89] found id: "5a721ee8fbf23681938715188a720d978d99601339c9b3665e03c525caa65153"
	I0914 18:32:43.014118  502178 cri.go:89] found id: "a0ffc6994576d8808d2bc3654f8e6857aae39c23082101d31933371d294b3a54"
	I0914 18:32:43.014123  502178 cri.go:89] found id: ""
	I0914 18:32:43.014131  502178 logs.go:276] 2 containers: [5a721ee8fbf23681938715188a720d978d99601339c9b3665e03c525caa65153 a0ffc6994576d8808d2bc3654f8e6857aae39c23082101d31933371d294b3a54]
	I0914 18:32:43.014195  502178 ssh_runner.go:195] Run: which crictl
	I0914 18:32:43.018148  502178 ssh_runner.go:195] Run: which crictl
	I0914 18:32:43.021828  502178 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0914 18:32:43.021905  502178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0914 18:32:43.064288  502178 cri.go:89] found id: "972b5a39d09086ff8c9f395172f7269a0ca56aaf464d729e10af67d65c6cd864"
	I0914 18:32:43.064312  502178 cri.go:89] found id: "c87aaf8daeeeea40058b74b7d5b95ec822b76e8baa48a3c6471522e852c3c6f1"
	I0914 18:32:43.064318  502178 cri.go:89] found id: ""
	I0914 18:32:43.064325  502178 logs.go:276] 2 containers: [972b5a39d09086ff8c9f395172f7269a0ca56aaf464d729e10af67d65c6cd864 c87aaf8daeeeea40058b74b7d5b95ec822b76e8baa48a3c6471522e852c3c6f1]
	I0914 18:32:43.064382  502178 ssh_runner.go:195] Run: which crictl
	I0914 18:32:43.068024  502178 ssh_runner.go:195] Run: which crictl
	I0914 18:32:43.071787  502178 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0914 18:32:43.071908  502178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0914 18:32:43.112434  502178 cri.go:89] found id: "9b8c5a5dbacbe4e27da49330f956e807adc9d438df1cdf63b08ac45205860dba"
	I0914 18:32:43.112495  502178 cri.go:89] found id: "a95b8ab1729a1aa8c7a7c40de867414e37edf9cdc76797052a9cca425d7bca7f"
	I0914 18:32:43.112505  502178 cri.go:89] found id: ""
	I0914 18:32:43.112520  502178 logs.go:276] 2 containers: [9b8c5a5dbacbe4e27da49330f956e807adc9d438df1cdf63b08ac45205860dba a95b8ab1729a1aa8c7a7c40de867414e37edf9cdc76797052a9cca425d7bca7f]
	I0914 18:32:43.112580  502178 ssh_runner.go:195] Run: which crictl
	I0914 18:32:43.116419  502178 ssh_runner.go:195] Run: which crictl
	I0914 18:32:43.119848  502178 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0914 18:32:43.119974  502178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0914 18:32:43.168705  502178 cri.go:89] found id: "7360c053a69d5601c51b989f2ac23dd90861afc62b3326c3fbc58a6e719e01e1"
	I0914 18:32:43.168730  502178 cri.go:89] found id: "72df987200414e54ca3d144bf10d64ca95c7922359fc2e2386694ec8173c1e67"
	I0914 18:32:43.168736  502178 cri.go:89] found id: ""
	I0914 18:32:43.168743  502178 logs.go:276] 2 containers: [7360c053a69d5601c51b989f2ac23dd90861afc62b3326c3fbc58a6e719e01e1 72df987200414e54ca3d144bf10d64ca95c7922359fc2e2386694ec8173c1e67]
	I0914 18:32:43.168804  502178 ssh_runner.go:195] Run: which crictl
	I0914 18:32:43.172656  502178 ssh_runner.go:195] Run: which crictl
	I0914 18:32:43.176219  502178 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0914 18:32:43.176293  502178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0914 18:32:43.222537  502178 cri.go:89] found id: "212d2e4c98e3b3c1c815131de347b7cfc628a01de49836ecac2a504508d99fb1"
	I0914 18:32:43.222559  502178 cri.go:89] found id: "a29bb42affc63d1ac4d075c5a582547362849f78991e81dccb9e9383a9acc1e6"
	I0914 18:32:43.222564  502178 cri.go:89] found id: ""
	I0914 18:32:43.222571  502178 logs.go:276] 2 containers: [212d2e4c98e3b3c1c815131de347b7cfc628a01de49836ecac2a504508d99fb1 a29bb42affc63d1ac4d075c5a582547362849f78991e81dccb9e9383a9acc1e6]
	I0914 18:32:43.222630  502178 ssh_runner.go:195] Run: which crictl
	I0914 18:32:43.226874  502178 ssh_runner.go:195] Run: which crictl
	I0914 18:32:43.230460  502178 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kindnet Namespaces:[]}
	I0914 18:32:43.230561  502178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0914 18:32:43.267512  502178 cri.go:89] found id: "e7cafc47b5d2ef5fcac5bb32122b8aed60c84a4f307e29cb4dd74e2894235bf6"
	I0914 18:32:43.267574  502178 cri.go:89] found id: "1559da34b47646301b138b6609891e4e385b98be7dfb19ec4a58e286b569f3bb"
	I0914 18:32:43.267636  502178 cri.go:89] found id: ""
	I0914 18:32:43.267668  502178 logs.go:276] 2 containers: [e7cafc47b5d2ef5fcac5bb32122b8aed60c84a4f307e29cb4dd74e2894235bf6 1559da34b47646301b138b6609891e4e385b98be7dfb19ec4a58e286b569f3bb]
	I0914 18:32:43.267744  502178 ssh_runner.go:195] Run: which crictl
	I0914 18:32:43.271462  502178 ssh_runner.go:195] Run: which crictl
	I0914 18:32:43.274970  502178 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0914 18:32:43.275088  502178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0914 18:32:43.316472  502178 cri.go:89] found id: "baf4a880dcbc428ee2de8ccf87eb5a2ed87629f96295602e31b48fd727df895b"
	I0914 18:32:43.316537  502178 cri.go:89] found id: "e33fb0ef05d96f108d450c97bb5a38a994f0dc7a40c5ebd38be6a3d81bac375d"
	I0914 18:32:43.316557  502178 cri.go:89] found id: ""
	I0914 18:32:43.316569  502178 logs.go:276] 2 containers: [baf4a880dcbc428ee2de8ccf87eb5a2ed87629f96295602e31b48fd727df895b e33fb0ef05d96f108d450c97bb5a38a994f0dc7a40c5ebd38be6a3d81bac375d]
	I0914 18:32:43.316633  502178 ssh_runner.go:195] Run: which crictl
	I0914 18:32:43.320454  502178 ssh_runner.go:195] Run: which crictl
	I0914 18:32:43.324021  502178 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0914 18:32:43.324137  502178 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0914 18:32:43.363816  502178 cri.go:89] found id: "bd3ccab56e960c40248fc23dd1fe2fe362b664956192709573790aec544544bb"
	I0914 18:32:43.363858  502178 cri.go:89] found id: ""
	I0914 18:32:43.363867  502178 logs.go:276] 1 containers: [bd3ccab56e960c40248fc23dd1fe2fe362b664956192709573790aec544544bb]
	I0914 18:32:43.363941  502178 ssh_runner.go:195] Run: which crictl
	I0914 18:32:43.367839  502178 logs.go:123] Gathering logs for kube-apiserver [d37fe18d4149efe0c0585d15624e1b685b673fe1e1318b8d8576a613caea0cd6] ...
	I0914 18:32:43.367906  502178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d37fe18d4149efe0c0585d15624e1b685b673fe1e1318b8d8576a613caea0cd6"
	I0914 18:32:43.438408  502178 logs.go:123] Gathering logs for coredns [c87aaf8daeeeea40058b74b7d5b95ec822b76e8baa48a3c6471522e852c3c6f1] ...
	I0914 18:32:43.438445  502178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c87aaf8daeeeea40058b74b7d5b95ec822b76e8baa48a3c6471522e852c3c6f1"
	I0914 18:32:43.479882  502178 logs.go:123] Gathering logs for kindnet [1559da34b47646301b138b6609891e4e385b98be7dfb19ec4a58e286b569f3bb] ...
	I0914 18:32:43.479908  502178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1559da34b47646301b138b6609891e4e385b98be7dfb19ec4a58e286b569f3bb"
	I0914 18:32:43.532767  502178 logs.go:123] Gathering logs for kube-apiserver [e27323b296260729fa2687560071d42a4514b4f137953b3d5c8216e446b15cce] ...
	I0914 18:32:43.532816  502178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e27323b296260729fa2687560071d42a4514b4f137953b3d5c8216e446b15cce"
	I0914 18:32:43.603529  502178 logs.go:123] Gathering logs for kube-controller-manager [a29bb42affc63d1ac4d075c5a582547362849f78991e81dccb9e9383a9acc1e6] ...
	I0914 18:32:43.603565  502178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a29bb42affc63d1ac4d075c5a582547362849f78991e81dccb9e9383a9acc1e6"
	I0914 18:32:43.671751  502178 logs.go:123] Gathering logs for kindnet [e7cafc47b5d2ef5fcac5bb32122b8aed60c84a4f307e29cb4dd74e2894235bf6] ...
	I0914 18:32:43.671787  502178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e7cafc47b5d2ef5fcac5bb32122b8aed60c84a4f307e29cb4dd74e2894235bf6"
	I0914 18:32:43.725470  502178 logs.go:123] Gathering logs for kubernetes-dashboard [bd3ccab56e960c40248fc23dd1fe2fe362b664956192709573790aec544544bb] ...
	I0914 18:32:43.725501  502178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bd3ccab56e960c40248fc23dd1fe2fe362b664956192709573790aec544544bb"
	I0914 18:32:43.772540  502178 logs.go:123] Gathering logs for etcd [5a721ee8fbf23681938715188a720d978d99601339c9b3665e03c525caa65153] ...
	I0914 18:32:43.772573  502178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a721ee8fbf23681938715188a720d978d99601339c9b3665e03c525caa65153"
	I0914 18:32:43.829502  502178 logs.go:123] Gathering logs for describe nodes ...
	I0914 18:32:43.829531  502178 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0914 18:32:43.981722  502178 logs.go:123] Gathering logs for kube-proxy [7360c053a69d5601c51b989f2ac23dd90861afc62b3326c3fbc58a6e719e01e1] ...
	I0914 18:32:43.981751  502178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7360c053a69d5601c51b989f2ac23dd90861afc62b3326c3fbc58a6e719e01e1"
	I0914 18:32:44.027878  502178 logs.go:123] Gathering logs for kube-controller-manager [212d2e4c98e3b3c1c815131de347b7cfc628a01de49836ecac2a504508d99fb1] ...
	I0914 18:32:44.027912  502178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 212d2e4c98e3b3c1c815131de347b7cfc628a01de49836ecac2a504508d99fb1"
	I0914 18:32:44.087185  502178 logs.go:123] Gathering logs for storage-provisioner [baf4a880dcbc428ee2de8ccf87eb5a2ed87629f96295602e31b48fd727df895b] ...
	I0914 18:32:44.087217  502178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 baf4a880dcbc428ee2de8ccf87eb5a2ed87629f96295602e31b48fd727df895b"
	I0914 18:32:44.126196  502178 logs.go:123] Gathering logs for storage-provisioner [e33fb0ef05d96f108d450c97bb5a38a994f0dc7a40c5ebd38be6a3d81bac375d] ...
	I0914 18:32:44.126276  502178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 e33fb0ef05d96f108d450c97bb5a38a994f0dc7a40c5ebd38be6a3d81bac375d"
	I0914 18:32:44.164184  502178 logs.go:123] Gathering logs for kubelet ...
	I0914 18:32:44.164212  502178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0914 18:32:44.219167  502178 logs.go:138] Found kubelet problem: Sep 14 18:27:12 old-k8s-version-947842 kubelet[663]: E0914 18:27:12.982406     663 reflector.go:138] object-"kube-system"/"coredns-token-jvnrz": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-jvnrz" is forbidden: User "system:node:old-k8s-version-947842" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-947842' and this object
	W0914 18:32:44.219413  502178 logs.go:138] Found kubelet problem: Sep 14 18:27:12 old-k8s-version-947842 kubelet[663]: E0914 18:27:12.982612     663 reflector.go:138] object-"kube-system"/"kindnet-token-m78v9": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kindnet-token-m78v9" is forbidden: User "system:node:old-k8s-version-947842" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-947842' and this object
	W0914 18:32:44.219629  502178 logs.go:138] Found kubelet problem: Sep 14 18:27:12 old-k8s-version-947842 kubelet[663]: E0914 18:27:12.982670     663 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:old-k8s-version-947842" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-947842' and this object
	W0914 18:32:44.220126  502178 logs.go:138] Found kubelet problem: Sep 14 18:27:13 old-k8s-version-947842 kubelet[663]: E0914 18:27:13.110439     663 reflector.go:138] object-"kube-system"/"metrics-server-token-c5t7x": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "metrics-server-token-c5t7x" is forbidden: User "system:node:old-k8s-version-947842" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-947842' and this object
	W0914 18:32:44.220335  502178 logs.go:138] Found kubelet problem: Sep 14 18:27:13 old-k8s-version-947842 kubelet[663]: E0914 18:27:13.110528     663 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:old-k8s-version-947842" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-947842' and this object
	W0914 18:32:44.220568  502178 logs.go:138] Found kubelet problem: Sep 14 18:27:13 old-k8s-version-947842 kubelet[663]: E0914 18:27:13.110587     663 reflector.go:138] object-"kube-system"/"storage-provisioner-token-vh9zq": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-vh9zq" is forbidden: User "system:node:old-k8s-version-947842" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-947842' and this object
	W0914 18:32:44.220781  502178 logs.go:138] Found kubelet problem: Sep 14 18:27:13 old-k8s-version-947842 kubelet[663]: E0914 18:27:13.110689     663 reflector.go:138] object-"default"/"default-token-84t92": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-84t92" is forbidden: User "system:node:old-k8s-version-947842" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'old-k8s-version-947842' and this object
	W0914 18:32:44.220996  502178 logs.go:138] Found kubelet problem: Sep 14 18:27:13 old-k8s-version-947842 kubelet[663]: E0914 18:27:13.110744     663 reflector.go:138] object-"kube-system"/"kube-proxy-token-64nh4": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-64nh4" is forbidden: User "system:node:old-k8s-version-947842" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-947842' and this object
	W0914 18:32:44.228668  502178 logs.go:138] Found kubelet problem: Sep 14 18:27:15 old-k8s-version-947842 kubelet[663]: E0914 18:27:15.676185     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0914 18:32:44.229489  502178 logs.go:138] Found kubelet problem: Sep 14 18:27:15 old-k8s-version-947842 kubelet[663]: E0914 18:27:15.800173     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 18:32:44.232262  502178 logs.go:138] Found kubelet problem: Sep 14 18:27:28 old-k8s-version-947842 kubelet[663]: E0914 18:27:28.519485     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0914 18:32:44.234664  502178 logs.go:138] Found kubelet problem: Sep 14 18:27:43 old-k8s-version-947842 kubelet[663]: E0914 18:27:43.031386     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.234850  502178 logs.go:138] Found kubelet problem: Sep 14 18:27:43 old-k8s-version-947842 kubelet[663]: E0914 18:27:43.520855     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 18:32:44.235181  502178 logs.go:138] Found kubelet problem: Sep 14 18:27:44 old-k8s-version-947842 kubelet[663]: E0914 18:27:44.037809     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.235510  502178 logs.go:138] Found kubelet problem: Sep 14 18:27:47 old-k8s-version-947842 kubelet[663]: E0914 18:27:47.018024     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.235961  502178 logs.go:138] Found kubelet problem: Sep 14 18:27:48 old-k8s-version-947842 kubelet[663]: E0914 18:27:48.069076     663 pod_workers.go:191] Error syncing pod fa8b8e41-51a3-48e3-80d5-900e27a86357 ("storage-provisioner_kube-system(fa8b8e41-51a3-48e3-80d5-900e27a86357)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(fa8b8e41-51a3-48e3-80d5-900e27a86357)"
	W0914 18:32:44.238729  502178 logs.go:138] Found kubelet problem: Sep 14 18:27:56 old-k8s-version-947842 kubelet[663]: E0914 18:27:56.507107     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0914 18:32:44.239320  502178 logs.go:138] Found kubelet problem: Sep 14 18:28:00 old-k8s-version-947842 kubelet[663]: E0914 18:28:00.162261     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.239784  502178 logs.go:138] Found kubelet problem: Sep 14 18:28:07 old-k8s-version-947842 kubelet[663]: E0914 18:28:07.017831     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.239968  502178 logs.go:138] Found kubelet problem: Sep 14 18:28:10 old-k8s-version-947842 kubelet[663]: E0914 18:28:10.504503     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 18:32:44.240554  502178 logs.go:138] Found kubelet problem: Sep 14 18:28:22 old-k8s-version-947842 kubelet[663]: E0914 18:28:22.216534     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.240741  502178 logs.go:138] Found kubelet problem: Sep 14 18:28:24 old-k8s-version-947842 kubelet[663]: E0914 18:28:24.497262     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 18:32:44.241069  502178 logs.go:138] Found kubelet problem: Sep 14 18:28:27 old-k8s-version-947842 kubelet[663]: E0914 18:28:27.017891     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.243541  502178 logs.go:138] Found kubelet problem: Sep 14 18:28:38 old-k8s-version-947842 kubelet[663]: E0914 18:28:38.515976     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0914 18:32:44.243903  502178 logs.go:138] Found kubelet problem: Sep 14 18:28:40 old-k8s-version-947842 kubelet[663]: E0914 18:28:40.496914     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.244233  502178 logs.go:138] Found kubelet problem: Sep 14 18:28:51 old-k8s-version-947842 kubelet[663]: E0914 18:28:51.496920     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.244421  502178 logs.go:138] Found kubelet problem: Sep 14 18:28:53 old-k8s-version-947842 kubelet[663]: E0914 18:28:53.500933     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 18:32:44.245008  502178 logs.go:138] Found kubelet problem: Sep 14 18:29:05 old-k8s-version-947842 kubelet[663]: E0914 18:29:05.347770     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.245334  502178 logs.go:138] Found kubelet problem: Sep 14 18:29:07 old-k8s-version-947842 kubelet[663]: E0914 18:29:07.018540     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.245517  502178 logs.go:138] Found kubelet problem: Sep 14 18:29:08 old-k8s-version-947842 kubelet[663]: E0914 18:29:08.503907     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 18:32:44.245842  502178 logs.go:138] Found kubelet problem: Sep 14 18:29:17 old-k8s-version-947842 kubelet[663]: E0914 18:29:17.497382     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.246028  502178 logs.go:138] Found kubelet problem: Sep 14 18:29:21 old-k8s-version-947842 kubelet[663]: E0914 18:29:21.497373     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 18:32:44.246354  502178 logs.go:138] Found kubelet problem: Sep 14 18:29:31 old-k8s-version-947842 kubelet[663]: E0914 18:29:31.498416     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.246538  502178 logs.go:138] Found kubelet problem: Sep 14 18:29:35 old-k8s-version-947842 kubelet[663]: E0914 18:29:35.499900     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 18:32:44.246863  502178 logs.go:138] Found kubelet problem: Sep 14 18:29:44 old-k8s-version-947842 kubelet[663]: E0914 18:29:44.496859     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.247047  502178 logs.go:138] Found kubelet problem: Sep 14 18:29:47 old-k8s-version-947842 kubelet[663]: E0914 18:29:47.497294     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 18:32:44.247374  502178 logs.go:138] Found kubelet problem: Sep 14 18:29:57 old-k8s-version-947842 kubelet[663]: E0914 18:29:57.497047     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.249823  502178 logs.go:138] Found kubelet problem: Sep 14 18:30:01 old-k8s-version-947842 kubelet[663]: E0914 18:30:01.506938     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	W0914 18:32:44.250151  502178 logs.go:138] Found kubelet problem: Sep 14 18:30:12 old-k8s-version-947842 kubelet[663]: E0914 18:30:12.496841     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.250335  502178 logs.go:138] Found kubelet problem: Sep 14 18:30:16 old-k8s-version-947842 kubelet[663]: E0914 18:30:16.497255     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 18:32:44.250666  502178 logs.go:138] Found kubelet problem: Sep 14 18:30:24 old-k8s-version-947842 kubelet[663]: E0914 18:30:24.496876     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 1m20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.250849  502178 logs.go:138] Found kubelet problem: Sep 14 18:30:27 old-k8s-version-947842 kubelet[663]: E0914 18:30:27.497713     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 18:32:44.251434  502178 logs.go:138] Found kubelet problem: Sep 14 18:30:37 old-k8s-version-947842 kubelet[663]: E0914 18:30:37.613666     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.251623  502178 logs.go:138] Found kubelet problem: Sep 14 18:30:39 old-k8s-version-947842 kubelet[663]: E0914 18:30:39.497550     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 18:32:44.251955  502178 logs.go:138] Found kubelet problem: Sep 14 18:30:47 old-k8s-version-947842 kubelet[663]: E0914 18:30:47.018300     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.252138  502178 logs.go:138] Found kubelet problem: Sep 14 18:30:54 old-k8s-version-947842 kubelet[663]: E0914 18:30:54.497165     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 18:32:44.252465  502178 logs.go:138] Found kubelet problem: Sep 14 18:30:57 old-k8s-version-947842 kubelet[663]: E0914 18:30:57.497839     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.252654  502178 logs.go:138] Found kubelet problem: Sep 14 18:31:05 old-k8s-version-947842 kubelet[663]: E0914 18:31:05.498341     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 18:32:44.252981  502178 logs.go:138] Found kubelet problem: Sep 14 18:31:09 old-k8s-version-947842 kubelet[663]: E0914 18:31:09.497864     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.253165  502178 logs.go:138] Found kubelet problem: Sep 14 18:31:17 old-k8s-version-947842 kubelet[663]: E0914 18:31:17.497262     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 18:32:44.253493  502178 logs.go:138] Found kubelet problem: Sep 14 18:31:22 old-k8s-version-947842 kubelet[663]: E0914 18:31:22.497240     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.253676  502178 logs.go:138] Found kubelet problem: Sep 14 18:31:29 old-k8s-version-947842 kubelet[663]: E0914 18:31:29.498360     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 18:32:44.254004  502178 logs.go:138] Found kubelet problem: Sep 14 18:31:35 old-k8s-version-947842 kubelet[663]: E0914 18:31:35.497333     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.254188  502178 logs.go:138] Found kubelet problem: Sep 14 18:31:40 old-k8s-version-947842 kubelet[663]: E0914 18:31:40.497245     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 18:32:44.254513  502178 logs.go:138] Found kubelet problem: Sep 14 18:31:49 old-k8s-version-947842 kubelet[663]: E0914 18:31:49.497799     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.254696  502178 logs.go:138] Found kubelet problem: Sep 14 18:31:54 old-k8s-version-947842 kubelet[663]: E0914 18:31:54.497275     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 18:32:44.255025  502178 logs.go:138] Found kubelet problem: Sep 14 18:32:04 old-k8s-version-947842 kubelet[663]: E0914 18:32:04.496848     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.255211  502178 logs.go:138] Found kubelet problem: Sep 14 18:32:07 old-k8s-version-947842 kubelet[663]: E0914 18:32:07.497491     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 18:32:44.255540  502178 logs.go:138] Found kubelet problem: Sep 14 18:32:17 old-k8s-version-947842 kubelet[663]: E0914 18:32:17.497769     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.255728  502178 logs.go:138] Found kubelet problem: Sep 14 18:32:19 old-k8s-version-947842 kubelet[663]: E0914 18:32:19.497750     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 18:32:44.256059  502178 logs.go:138] Found kubelet problem: Sep 14 18:32:30 old-k8s-version-947842 kubelet[663]: E0914 18:32:30.496884     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.256244  502178 logs.go:138] Found kubelet problem: Sep 14 18:32:33 old-k8s-version-947842 kubelet[663]: E0914 18:32:33.498079     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0914 18:32:44.256255  502178 logs.go:123] Gathering logs for etcd [a0ffc6994576d8808d2bc3654f8e6857aae39c23082101d31933371d294b3a54] ...
	I0914 18:32:44.256269  502178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a0ffc6994576d8808d2bc3654f8e6857aae39c23082101d31933371d294b3a54"
	I0914 18:32:44.298355  502178 logs.go:123] Gathering logs for coredns [972b5a39d09086ff8c9f395172f7269a0ca56aaf464d729e10af67d65c6cd864] ...
	I0914 18:32:44.298434  502178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 972b5a39d09086ff8c9f395172f7269a0ca56aaf464d729e10af67d65c6cd864"
	I0914 18:32:44.340613  502178 logs.go:123] Gathering logs for kube-scheduler [9b8c5a5dbacbe4e27da49330f956e807adc9d438df1cdf63b08ac45205860dba] ...
	I0914 18:32:44.340650  502178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9b8c5a5dbacbe4e27da49330f956e807adc9d438df1cdf63b08ac45205860dba"
	I0914 18:32:44.385361  502178 logs.go:123] Gathering logs for kube-scheduler [a95b8ab1729a1aa8c7a7c40de867414e37edf9cdc76797052a9cca425d7bca7f] ...
	I0914 18:32:44.385391  502178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a95b8ab1729a1aa8c7a7c40de867414e37edf9cdc76797052a9cca425d7bca7f"
	I0914 18:32:44.429688  502178 logs.go:123] Gathering logs for kube-proxy [72df987200414e54ca3d144bf10d64ca95c7922359fc2e2386694ec8173c1e67] ...
	I0914 18:32:44.429719  502178 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 72df987200414e54ca3d144bf10d64ca95c7922359fc2e2386694ec8173c1e67"
	I0914 18:32:44.472616  502178 logs.go:123] Gathering logs for containerd ...
	I0914 18:32:44.472657  502178 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0914 18:32:44.532562  502178 logs.go:123] Gathering logs for container status ...
	I0914 18:32:44.532599  502178 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0914 18:32:44.593349  502178 logs.go:123] Gathering logs for dmesg ...
	I0914 18:32:44.593380  502178 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0914 18:32:44.610514  502178 out.go:358] Setting ErrFile to fd 2...
	I0914 18:32:44.610538  502178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0914 18:32:44.610612  502178 out.go:270] X Problems detected in kubelet:
	X Problems detected in kubelet:
	W0914 18:32:44.610631  502178 out.go:270]   Sep 14 18:32:07 old-k8s-version-947842 kubelet[663]: E0914 18:32:07.497491     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 14 18:32:07 old-k8s-version-947842 kubelet[663]: E0914 18:32:07.497491     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 18:32:44.610671  502178 out.go:270]   Sep 14 18:32:17 old-k8s-version-947842 kubelet[663]: E0914 18:32:17.497769     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	  Sep 14 18:32:17 old-k8s-version-947842 kubelet[663]: E0914 18:32:17.497769     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.610700  502178 out.go:270]   Sep 14 18:32:19 old-k8s-version-947842 kubelet[663]: E0914 18:32:19.497750     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 14 18:32:19 old-k8s-version-947842 kubelet[663]: E0914 18:32:19.497750     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	W0914 18:32:44.610712  502178 out.go:270]   Sep 14 18:32:30 old-k8s-version-947842 kubelet[663]: E0914 18:32:30.496884     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	  Sep 14 18:32:30 old-k8s-version-947842 kubelet[663]: E0914 18:32:30.496884     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	W0914 18:32:44.610724  502178 out.go:270]   Sep 14 18:32:33 old-k8s-version-947842 kubelet[663]: E0914 18:32:33.498079     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	  Sep 14 18:32:33 old-k8s-version-947842 kubelet[663]: E0914 18:32:33.498079     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	I0914 18:32:44.610735  502178 out.go:358] Setting ErrFile to fd 2...
	I0914 18:32:44.610742  502178 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:32:54.611703  502178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:32:54.624970  502178 api_server.go:72] duration metric: took 6m3.066336449s to wait for apiserver process to appear ...
	I0914 18:32:54.625001  502178 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:32:54.627566  502178 out.go:201] 
	W0914 18:32:54.629267  502178 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0914 18:32:54.629289  502178 out.go:270] * 
	* 
	W0914 18:32:54.630242  502178 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 18:32:54.632619  502178 out.go:201] 

** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-arm64 start -p old-k8s-version-947842 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0": exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-947842
helpers_test.go:235: (dbg) docker inspect old-k8s-version-947842:

-- stdout --
	[
	    {
	        "Id": "3e5003dd38594a0f19ff076bbbb04573b52bbd762fddff7c472f66100a03e1c4",
	        "Created": "2024-09-14T18:23:57.92754152Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 502481,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-14T18:26:42.969719719Z",
	            "FinishedAt": "2024-09-14T18:26:41.6210443Z"
	        },
	        "Image": "sha256:86ef0f8f97fae81f88ea7ff0848cf3d848f7964ac99ca9c948802eb432bfd351",
	        "ResolvConfPath": "/var/lib/docker/containers/3e5003dd38594a0f19ff076bbbb04573b52bbd762fddff7c472f66100a03e1c4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3e5003dd38594a0f19ff076bbbb04573b52bbd762fddff7c472f66100a03e1c4/hostname",
	        "HostsPath": "/var/lib/docker/containers/3e5003dd38594a0f19ff076bbbb04573b52bbd762fddff7c472f66100a03e1c4/hosts",
	        "LogPath": "/var/lib/docker/containers/3e5003dd38594a0f19ff076bbbb04573b52bbd762fddff7c472f66100a03e1c4/3e5003dd38594a0f19ff076bbbb04573b52bbd762fddff7c472f66100a03e1c4-json.log",
	        "Name": "/old-k8s-version-947842",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-947842:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-947842",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9196f7a274d3141df3c095cd94301abe7ff750405f2455a1b12c3c119f167105-init/diff:/var/lib/docker/overlay2/bf50794440da861115e50c5b2a7303272c8b338b643d76ff54196910083f51c0/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9196f7a274d3141df3c095cd94301abe7ff750405f2455a1b12c3c119f167105/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9196f7a274d3141df3c095cd94301abe7ff750405f2455a1b12c3c119f167105/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9196f7a274d3141df3c095cd94301abe7ff750405f2455a1b12c3c119f167105/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-947842",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-947842/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-947842",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-947842",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-947842",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d5df6eb103da7dee9fa422ee39e49dbf5ee0ce9f1c5df1df5dc4868d9f7c2d5a",
	            "SandboxKey": "/var/run/docker/netns/d5df6eb103da",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-947842": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "fb412ba63a35a24d24bd0278607247d7b4587943a42163f64b0b74bef9d8716a",
	                    "EndpointID": "edf3394078752ae85ff18e502efb76e85304f987cbe999558a43ec45e7b2cebd",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-947842",
	                        "3e5003dd3859"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-947842 -n old-k8s-version-947842
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-947842 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-947842 logs -n 25: (2.673602677s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |         Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	| start   | -p cert-expiration-340657                              | cert-expiration-340657   | jenkins | v1.34.0 | 14 Sep 24 18:22 UTC | 14 Sep 24 18:23 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=3m                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | force-systemd-env-676567                               | force-systemd-env-676567 | jenkins | v1.34.0 | 14 Sep 24 18:23 UTC | 14 Sep 24 18:23 UTC |
	|         | ssh cat                                                |                          |         |         |                     |                     |
	|         | /etc/containerd/config.toml                            |                          |         |         |                     |                     |
	| delete  | -p force-systemd-env-676567                            | force-systemd-env-676567 | jenkins | v1.34.0 | 14 Sep 24 18:23 UTC | 14 Sep 24 18:23 UTC |
	| start   | -p cert-options-553239                                 | cert-options-553239      | jenkins | v1.34.0 | 14 Sep 24 18:23 UTC | 14 Sep 24 18:23 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1                              |                          |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15                          |                          |         |         |                     |                     |
	|         | --apiserver-names=localhost                            |                          |         |         |                     |                     |
	|         | --apiserver-names=www.google.com                       |                          |         |         |                     |                     |
	|         | --apiserver-port=8555                                  |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| ssh     | cert-options-553239 ssh                                | cert-options-553239      | jenkins | v1.34.0 | 14 Sep 24 18:23 UTC | 14 Sep 24 18:23 UTC |
	|         | openssl x509 -text -noout -in                          |                          |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                          |         |         |                     |                     |
	| ssh     | -p cert-options-553239 -- sudo                         | cert-options-553239      | jenkins | v1.34.0 | 14 Sep 24 18:23 UTC | 14 Sep 24 18:23 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                          |         |         |                     |                     |
	| delete  | -p cert-options-553239                                 | cert-options-553239      | jenkins | v1.34.0 | 14 Sep 24 18:23 UTC | 14 Sep 24 18:23 UTC |
	| start   | -p old-k8s-version-947842                              | old-k8s-version-947842   | jenkins | v1.34.0 | 14 Sep 24 18:23 UTC | 14 Sep 24 18:26 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| start   | -p cert-expiration-340657                              | cert-expiration-340657   | jenkins | v1.34.0 | 14 Sep 24 18:26 UTC | 14 Sep 24 18:26 UTC |
	|         | --memory=2048                                          |                          |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-947842        | old-k8s-version-947842   | jenkins | v1.34.0 | 14 Sep 24 18:26 UTC | 14 Sep 24 18:26 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p old-k8s-version-947842                              | old-k8s-version-947842   | jenkins | v1.34.0 | 14 Sep 24 18:26 UTC | 14 Sep 24 18:26 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| delete  | -p cert-expiration-340657                              | cert-expiration-340657   | jenkins | v1.34.0 | 14 Sep 24 18:26 UTC | 14 Sep 24 18:26 UTC |
	| start   | -p no-preload-760354                                   | no-preload-760354        | jenkins | v1.34.0 | 14 Sep 24 18:26 UTC | 14 Sep 24 18:27 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-947842             | old-k8s-version-947842   | jenkins | v1.34.0 | 14 Sep 24 18:26 UTC | 14 Sep 24 18:26 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p old-k8s-version-947842                              | old-k8s-version-947842   | jenkins | v1.34.0 | 14 Sep 24 18:26 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --kvm-network=default                                  |                          |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                          |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                          |         |         |                     |                     |
	|         | --keep-context=false                                   |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                          |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-760354             | no-preload-760354        | jenkins | v1.34.0 | 14 Sep 24 18:27 UTC | 14 Sep 24 18:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                          |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                          |         |         |                     |                     |
	| stop    | -p no-preload-760354                                   | no-preload-760354        | jenkins | v1.34.0 | 14 Sep 24 18:27 UTC | 14 Sep 24 18:28 UTC |
	|         | --alsologtostderr -v=3                                 |                          |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-760354                  | no-preload-760354        | jenkins | v1.34.0 | 14 Sep 24 18:28 UTC | 14 Sep 24 18:28 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                          |         |         |                     |                     |
	| start   | -p no-preload-760354                                   | no-preload-760354        | jenkins | v1.34.0 | 14 Sep 24 18:28 UTC | 14 Sep 24 18:32 UTC |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr                                      |                          |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                          |         |         |                     |                     |
	|         | --driver=docker                                        |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	| image   | no-preload-760354 image list                           | no-preload-760354        | jenkins | v1.34.0 | 14 Sep 24 18:32 UTC | 14 Sep 24 18:32 UTC |
	|         | --format=json                                          |                          |         |         |                     |                     |
	| pause   | -p no-preload-760354                                   | no-preload-760354        | jenkins | v1.34.0 | 14 Sep 24 18:32 UTC | 14 Sep 24 18:32 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| unpause | -p no-preload-760354                                   | no-preload-760354        | jenkins | v1.34.0 | 14 Sep 24 18:32 UTC | 14 Sep 24 18:32 UTC |
	|         | --alsologtostderr -v=1                                 |                          |         |         |                     |                     |
	| delete  | -p no-preload-760354                                   | no-preload-760354        | jenkins | v1.34.0 | 14 Sep 24 18:32 UTC | 14 Sep 24 18:32 UTC |
	| delete  | -p no-preload-760354                                   | no-preload-760354        | jenkins | v1.34.0 | 14 Sep 24 18:32 UTC | 14 Sep 24 18:32 UTC |
	| start   | -p embed-certs-930089                                  | embed-certs-930089       | jenkins | v1.34.0 | 14 Sep 24 18:32 UTC |                     |
	|         | --memory=2200                                          |                          |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                          |         |         |                     |                     |
	|         | --embed-certs --driver=docker                          |                          |         |         |                     |                     |
	|         | --container-runtime=containerd                         |                          |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                           |                          |         |         |                     |                     |
	|---------|--------------------------------------------------------|--------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 18:32:53
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 18:32:53.043115  514401 out.go:345] Setting OutFile to fd 1 ...
	I0914 18:32:53.043290  514401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:32:53.043301  514401 out.go:358] Setting ErrFile to fd 2...
	I0914 18:32:53.043307  514401 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:32:53.043555  514401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-292860/.minikube/bin
	I0914 18:32:53.044060  514401 out.go:352] Setting JSON to false
	I0914 18:32:53.045301  514401 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8125,"bootTime":1726330648,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0914 18:32:53.045392  514401 start.go:139] virtualization:  
	I0914 18:32:53.048190  514401 out.go:177] * [embed-certs-930089] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0914 18:32:53.050453  514401 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 18:32:53.050519  514401 notify.go:220] Checking for updates...
	I0914 18:32:53.054177  514401 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 18:32:53.056102  514401 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-292860/kubeconfig
	I0914 18:32:53.058238  514401 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-292860/.minikube
	I0914 18:32:53.060032  514401 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 18:32:53.061877  514401 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 18:32:53.064356  514401 config.go:182] Loaded profile config "old-k8s-version-947842": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.20.0
	I0914 18:32:53.064449  514401 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 18:32:53.091870  514401 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 18:32:53.091988  514401 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 18:32:53.158788  514401 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:35 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-14 18:32:53.143235715 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 18:32:53.158902  514401 docker.go:318] overlay module found
	I0914 18:32:53.161904  514401 out.go:177] * Using the docker driver based on user configuration
	I0914 18:32:53.163983  514401 start.go:297] selected driver: docker
	I0914 18:32:53.163999  514401 start.go:901] validating driver "docker" against <nil>
	I0914 18:32:53.164013  514401 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 18:32:53.164647  514401 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 18:32:53.236960  514401 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-14 18:32:53.225892307 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 18:32:53.237177  514401 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 18:32:53.237928  514401 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0914 18:32:53.239715  514401 out.go:177] * Using Docker driver with root privileges
	I0914 18:32:53.241615  514401 cni.go:84] Creating CNI manager for ""
	I0914 18:32:53.241683  514401 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 18:32:53.241699  514401 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 18:32:53.241775  514401 start.go:340] cluster config:
	{Name:embed-certs-930089 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-930089 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 18:32:53.243473  514401 out.go:177] * Starting "embed-certs-930089" primary control-plane node in "embed-certs-930089" cluster
	I0914 18:32:53.245179  514401 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0914 18:32:53.246860  514401 out.go:177] * Pulling base image v0.0.45-1726281268-19643 ...
	I0914 18:32:53.248546  514401 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0914 18:32:53.248603  514401 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19643-292860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0914 18:32:53.248616  514401 cache.go:56] Caching tarball of preloaded images
	I0914 18:32:53.248639  514401 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e in local docker daemon
	I0914 18:32:53.248702  514401 preload.go:172] Found /home/jenkins/minikube-integration/19643-292860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0914 18:32:53.248712  514401 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0914 18:32:53.248822  514401 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/embed-certs-930089/config.json ...
	I0914 18:32:53.248840  514401 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/embed-certs-930089/config.json: {Name:mk013f737edf5322d48bdea3b46254da904c5789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	W0914 18:32:53.268812  514401 image.go:95] image gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e is of wrong architecture
	I0914 18:32:53.268837  514401 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e to local cache
	I0914 18:32:53.268906  514401 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e in local cache directory
	I0914 18:32:53.268931  514401 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e in local cache directory, skipping pull
	I0914 18:32:53.268940  514401 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e exists in cache, skipping pull
	I0914 18:32:53.268949  514401 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e as a tarball
	I0914 18:32:53.268954  514401 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e from local cache
	I0914 18:32:53.402924  514401 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e from cached tarball
	I0914 18:32:53.402962  514401 cache.go:194] Successfully downloaded all kic artifacts
	I0914 18:32:53.402992  514401 start.go:360] acquireMachinesLock for embed-certs-930089: {Name:mk867b809bb9ca1bf7d64283ddd0c09c23fa591c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0914 18:32:53.403122  514401 start.go:364] duration metric: took 109.251µs to acquireMachinesLock for "embed-certs-930089"
	I0914 18:32:53.403155  514401 start.go:93] Provisioning new machine with config: &{Name:embed-certs-930089 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:embed-certs-930089 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0914 18:32:53.403253  514401 start.go:125] createHost starting for "" (driver="docker")
	I0914 18:32:54.611703  502178 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:32:54.624970  502178 api_server.go:72] duration metric: took 6m3.066336449s to wait for apiserver process to appear ...
	I0914 18:32:54.625001  502178 api_server.go:88] waiting for apiserver healthz status ...
	I0914 18:32:54.627566  502178 out.go:201] 
	W0914 18:32:54.629267  502178 out.go:270] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: cluster wait timed out during healthz check
	W0914 18:32:54.629289  502178 out.go:270] * 
	W0914 18:32:54.630242  502178 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0914 18:32:54.632619  502178 out.go:201] 
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	5d1ec4622c4aa       523cad1a4df73       2 minutes ago       Exited              dashboard-metrics-scraper   5                   62c395b77a3d3       dashboard-metrics-scraper-8d5bb5db8-zbxkb
	baf4a880dcbc4       ba04bb24b9575       4 minutes ago       Running             storage-provisioner         2                   d7c091f23a58c       storage-provisioner
	bd3ccab56e960       20b332c9a70d8       5 minutes ago       Running             kubernetes-dashboard        0                   41e1cee416a9a       kubernetes-dashboard-cd95d586-ddl7d
	e33fb0ef05d96       ba04bb24b9575       5 minutes ago       Exited              storage-provisioner         1                   d7c091f23a58c       storage-provisioner
	5f1c7830e1b42       1611cd07b61d5       5 minutes ago       Running             busybox                     1                   617a210a7c391       busybox
	972b5a39d0908       db91994f4ee8f       5 minutes ago       Running             coredns                     1                   e1e4cf138bf76       coredns-74ff55c5b-nr99f
	7360c053a69d5       25a5233254979       5 minutes ago       Running             kube-proxy                  1                   07dc05a0e31a3       kube-proxy-5cjbh
	e7cafc47b5d2e       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                 1                   70fe99865230a       kindnet-q4rtq
	9b8c5a5dbacbe       e7605f88f17d6       5 minutes ago       Running             kube-scheduler              1                   b3c9c75a57e19       kube-scheduler-old-k8s-version-947842
	e27323b296260       2c08bbbc02d3a       5 minutes ago       Running             kube-apiserver              1                   f437956f96610       kube-apiserver-old-k8s-version-947842
	5a721ee8fbf23       05b738aa1bc63       5 minutes ago       Running             etcd                        1                   0b7517277f60e       etcd-old-k8s-version-947842
	212d2e4c98e3b       1df8a2b116bd1       5 minutes ago       Running             kube-controller-manager     1                   45c46796db119       kube-controller-manager-old-k8s-version-947842
	5ec43fe6b4ef5       1611cd07b61d5       6 minutes ago       Exited              busybox                     0                   3c60fa7486588       busybox
	c87aaf8daeeee       db91994f4ee8f       7 minutes ago       Exited              coredns                     0                   2d1e31d7bdace       coredns-74ff55c5b-nr99f
	1559da34b4764       6a23fa8fd2b78       8 minutes ago       Exited              kindnet-cni                 0                   8a944bb5e10e2       kindnet-q4rtq
	72df987200414       25a5233254979       8 minutes ago       Exited              kube-proxy                  0                   83f0c684ec058       kube-proxy-5cjbh
	d37fe18d4149e       2c08bbbc02d3a       8 minutes ago       Exited              kube-apiserver              0                   e2227657ee0fb       kube-apiserver-old-k8s-version-947842
	a95b8ab1729a1       e7605f88f17d6       8 minutes ago       Exited              kube-scheduler              0                   8fa3bad827dbb       kube-scheduler-old-k8s-version-947842
	a29bb42affc63       1df8a2b116bd1       8 minutes ago       Exited              kube-controller-manager     0                   657a691c74427       kube-controller-manager-old-k8s-version-947842
	a0ffc6994576d       05b738aa1bc63       8 minutes ago       Exited              etcd                        0                   665a15bfb43f9       etcd-old-k8s-version-947842
	
	
	==> containerd <==
	Sep 14 18:29:04 old-k8s-version-947842 containerd[568]: time="2024-09-14T18:29:04.519424025Z" level=info msg="CreateContainer within sandbox \"62c395b77a3d36e1b28383c5b346a97b064dacd85837b8bccba0d73b91aa34bc\" for name:\"dashboard-metrics-scraper\"  attempt:4 returns container id \"36708198b47a0593c3a27107fc5b580d5884919eb9559e51db266ac7f1693641\""
	Sep 14 18:29:04 old-k8s-version-947842 containerd[568]: time="2024-09-14T18:29:04.520027551Z" level=info msg="StartContainer for \"36708198b47a0593c3a27107fc5b580d5884919eb9559e51db266ac7f1693641\""
	Sep 14 18:29:04 old-k8s-version-947842 containerd[568]: time="2024-09-14T18:29:04.620670606Z" level=info msg="StartContainer for \"36708198b47a0593c3a27107fc5b580d5884919eb9559e51db266ac7f1693641\" returns successfully"
	Sep 14 18:29:04 old-k8s-version-947842 containerd[568]: time="2024-09-14T18:29:04.661819552Z" level=info msg="shim disconnected" id=36708198b47a0593c3a27107fc5b580d5884919eb9559e51db266ac7f1693641 namespace=k8s.io
	Sep 14 18:29:04 old-k8s-version-947842 containerd[568]: time="2024-09-14T18:29:04.661890165Z" level=warning msg="cleaning up after shim disconnected" id=36708198b47a0593c3a27107fc5b580d5884919eb9559e51db266ac7f1693641 namespace=k8s.io
	Sep 14 18:29:04 old-k8s-version-947842 containerd[568]: time="2024-09-14T18:29:04.661900380Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 14 18:29:05 old-k8s-version-947842 containerd[568]: time="2024-09-14T18:29:05.349953352Z" level=info msg="RemoveContainer for \"4dc4955c90894576fc6b9d060b7f14b39a18a39d98c4739704d6b5cc5de4785f\""
	Sep 14 18:29:05 old-k8s-version-947842 containerd[568]: time="2024-09-14T18:29:05.359505411Z" level=info msg="RemoveContainer for \"4dc4955c90894576fc6b9d060b7f14b39a18a39d98c4739704d6b5cc5de4785f\" returns successfully"
	Sep 14 18:30:01 old-k8s-version-947842 containerd[568]: time="2024-09-14T18:30:01.498002365Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 18:30:01 old-k8s-version-947842 containerd[568]: time="2024-09-14T18:30:01.505088553Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Sep 14 18:30:01 old-k8s-version-947842 containerd[568]: time="2024-09-14T18:30:01.506513673Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 14 18:30:01 old-k8s-version-947842 containerd[568]: time="2024-09-14T18:30:01.506546017Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	Sep 14 18:30:36 old-k8s-version-947842 containerd[568]: time="2024-09-14T18:30:36.499085725Z" level=info msg="CreateContainer within sandbox \"62c395b77a3d36e1b28383c5b346a97b064dacd85837b8bccba0d73b91aa34bc\" for container name:\"dashboard-metrics-scraper\"  attempt:5"
	Sep 14 18:30:36 old-k8s-version-947842 containerd[568]: time="2024-09-14T18:30:36.513614384Z" level=info msg="CreateContainer within sandbox \"62c395b77a3d36e1b28383c5b346a97b064dacd85837b8bccba0d73b91aa34bc\" for name:\"dashboard-metrics-scraper\"  attempt:5 returns container id \"5d1ec4622c4aaa9242035f3e32d5788774c045b33b4248468b558b8acdf7fb49\""
	Sep 14 18:30:36 old-k8s-version-947842 containerd[568]: time="2024-09-14T18:30:36.514350997Z" level=info msg="StartContainer for \"5d1ec4622c4aaa9242035f3e32d5788774c045b33b4248468b558b8acdf7fb49\""
	Sep 14 18:30:36 old-k8s-version-947842 containerd[568]: time="2024-09-14T18:30:36.579705627Z" level=info msg="StartContainer for \"5d1ec4622c4aaa9242035f3e32d5788774c045b33b4248468b558b8acdf7fb49\" returns successfully"
	Sep 14 18:30:36 old-k8s-version-947842 containerd[568]: time="2024-09-14T18:30:36.636213753Z" level=info msg="shim disconnected" id=5d1ec4622c4aaa9242035f3e32d5788774c045b33b4248468b558b8acdf7fb49 namespace=k8s.io
	Sep 14 18:30:36 old-k8s-version-947842 containerd[568]: time="2024-09-14T18:30:36.636294729Z" level=warning msg="cleaning up after shim disconnected" id=5d1ec4622c4aaa9242035f3e32d5788774c045b33b4248468b558b8acdf7fb49 namespace=k8s.io
	Sep 14 18:30:36 old-k8s-version-947842 containerd[568]: time="2024-09-14T18:30:36.636304395Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 14 18:30:37 old-k8s-version-947842 containerd[568]: time="2024-09-14T18:30:37.614889371Z" level=info msg="RemoveContainer for \"36708198b47a0593c3a27107fc5b580d5884919eb9559e51db266ac7f1693641\""
	Sep 14 18:30:37 old-k8s-version-947842 containerd[568]: time="2024-09-14T18:30:37.620957061Z" level=info msg="RemoveContainer for \"36708198b47a0593c3a27107fc5b580d5884919eb9559e51db266ac7f1693641\" returns successfully"
	Sep 14 18:32:47 old-k8s-version-947842 containerd[568]: time="2024-09-14T18:32:47.498040987Z" level=info msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 18:32:47 old-k8s-version-947842 containerd[568]: time="2024-09-14T18:32:47.504493569Z" level=info msg="trying next host" error="failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host" host=fake.domain
	Sep 14 18:32:47 old-k8s-version-947842 containerd[568]: time="2024-09-14T18:32:47.506171915Z" level=error msg="PullImage \"fake.domain/registry.k8s.io/echoserver:1.4\" failed" error="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 14 18:32:47 old-k8s-version-947842 containerd[568]: time="2024-09-14T18:32:47.506290043Z" level=info msg="stop pulling image fake.domain/registry.k8s.io/echoserver:1.4: active requests=0, bytes read=0"
	
	
	==> coredns [972b5a39d09086ff8c9f395172f7269a0ca56aaf464d729e10af67d65c6cd864] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:41715 - 9536 "HINFO IN 8072375425689032419.239490416998512346. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.043225319s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	I0914 18:27:46.837928       1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-14 18:27:16.837211878 +0000 UTC m=+0.064819037) (total time: 30.000619358s):
	Trace[2019727887]: [30.000619358s] [30.000619358s] END
	E0914 18:27:46.837980       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0914 18:27:46.847830       1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-14 18:27:16.847491385 +0000 UTC m=+0.075098544) (total time: 30.000300869s):
	Trace[939984059]: [30.000300869s] [30.000300869s] END
	E0914 18:27:46.847858       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	I0914 18:27:46.848076       1 trace.go:116] Trace[1474941318]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2024-09-14 18:27:16.847898432 +0000 UTC m=+0.075505583) (total time: 30.000165181s):
	Trace[1474941318]: [30.000165181s] [30.000165181s] END
	E0914 18:27:46.848082       1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [c87aaf8daeeeea40058b74b7d5b95ec822b76e8baa48a3c6471522e852c3c6f1] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = b494d968e357ba1b925cee838fbd78ed
	CoreDNS-1.7.0
	linux/arm64, go1.14.4, f59c03d
	[INFO] 127.0.0.1:59171 - 62854 "HINFO IN 449701432549218960.4727377929580213325. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.015318488s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-947842
	Roles:              control-plane,master
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-947842
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fbeeb744274463b05401c917e5ab21bbaf5ef95a
	                    minikube.k8s.io/name=old-k8s-version-947842
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_14T18_24_36_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 14 Sep 2024 18:24:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-947842
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 14 Sep 2024 18:32:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 14 Sep 2024 18:28:03 +0000   Sat, 14 Sep 2024 18:24:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 14 Sep 2024 18:28:03 +0000   Sat, 14 Sep 2024 18:24:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 14 Sep 2024 18:28:03 +0000   Sat, 14 Sep 2024 18:24:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 14 Sep 2024 18:28:03 +0000   Sat, 14 Sep 2024 18:24:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-947842
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 f7550ba5df88424ea31bc0459a20ba49
	  System UUID:                72ed495a-330f-43f0-9f8b-2b0b3a166daf
	  Boot ID:                    35fd0b1a-e7ce-4152-9f40-0c82d6bd6d43
	  Kernel Version:             5.15.0-1069-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.20.0
	  Kube-Proxy Version:         v1.20.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 coredns-74ff55c5b-nr99f                           100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m5s
	  kube-system                 etcd-old-k8s-version-947842                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m12s
	  kube-system                 kindnet-q4rtq                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m5s
	  kube-system                 kube-apiserver-old-k8s-version-947842             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 kube-controller-manager-old-k8s-version-947842    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 kube-proxy-5cjbh                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m5s
	  kube-system                 kube-scheduler-old-k8s-version-947842             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m12s
	  kube-system                 metrics-server-9975d5f86-2mxk9                    100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m26s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m4s
	  kubernetes-dashboard        dashboard-metrics-scraper-8d5bb5db8-zbxkb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	  kubernetes-dashboard        kubernetes-dashboard-cd95d586-ddl7d               0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  100Mi (0%)  0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 8m12s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m12s                  kubelet     Node old-k8s-version-947842 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m12s                  kubelet     Node old-k8s-version-947842 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m12s                  kubelet     Node old-k8s-version-947842 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m12s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                8m5s                   kubelet     Node old-k8s-version-947842 status is now: NodeReady
	  Normal  Starting                 8m4s                   kube-proxy  Starting kube-proxy.
	  Normal  Starting                 5m57s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m57s (x8 over 5m57s)  kubelet     Node old-k8s-version-947842 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m57s (x8 over 5m57s)  kubelet     Node old-k8s-version-947842 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m57s (x7 over 5m57s)  kubelet     Node old-k8s-version-947842 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m57s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 5m39s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Sep14 17:07] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [5a721ee8fbf23681938715188a720d978d99601339c9b3665e03c525caa65153] <==
	2024-09-14 18:28:54.190877 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:29:04.190899 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:29:14.190868 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:29:24.190824 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:29:34.190879 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:29:44.190809 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:29:54.190946 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:30:04.191010 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:30:14.191029 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:30:24.190829 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:30:34.190843 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:30:44.190806 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:30:54.190962 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:31:04.190796 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:31:14.191044 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:31:24.190887 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:31:34.190893 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:31:44.191018 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:31:54.190960 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:32:04.190702 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:32:14.190812 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:32:24.190980 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:32:34.190818 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:32:44.191523 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:32:54.190872 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> etcd [a0ffc6994576d8808d2bc3654f8e6857aae39c23082101d31933371d294b3a54] <==
	raft2024/09/14 18:24:26 INFO: ea7e25599daad906 switched to configuration voters=(16896983918768216326)
	2024-09-14 18:24:26.137894 I | etcdserver/membership: added member ea7e25599daad906 [https://192.168.76.2:2380] to cluster 6f20f2c4b2fb5f8a
	raft2024/09/14 18:24:27 INFO: ea7e25599daad906 is starting a new election at term 1
	raft2024/09/14 18:24:27 INFO: ea7e25599daad906 became candidate at term 2
	raft2024/09/14 18:24:27 INFO: ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2
	raft2024/09/14 18:24:27 INFO: ea7e25599daad906 became leader at term 2
	raft2024/09/14 18:24:27 INFO: raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2
	2024-09-14 18:24:27.026868 I | etcdserver: setting up the initial cluster version to 3.4
	2024-09-14 18:24:27.031753 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-09-14 18:24:27.031993 I | etcdserver/api: enabled capabilities for version 3.4
	2024-09-14 18:24:27.032092 I | etcdserver: published {Name:old-k8s-version-947842 ClientURLs:[https://192.168.76.2:2379]} to cluster 6f20f2c4b2fb5f8a
	2024-09-14 18:24:27.032330 I | embed: ready to serve client requests
	2024-09-14 18:24:27.032733 I | embed: ready to serve client requests
	2024-09-14 18:24:27.034886 I | embed: serving client requests on 192.168.76.2:2379
	2024-09-14 18:24:27.042835 I | embed: serving client requests on 127.0.0.1:2379
	2024-09-14 18:24:52.287920 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:25:00.162182 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:25:10.161446 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:25:20.161452 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:25:30.161645 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:25:40.161385 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:25:50.162461 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:26:00.183196 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:26:10.161607 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	2024-09-14 18:26:20.161464 I | etcdserver/api/etcdhttp: /health OK (status code 200)
	
	
	==> kernel <==
	 18:32:56 up  2:15,  0 users,  load average: 1.32, 1.84, 2.40
	Linux old-k8s-version-947842 5.15.0-1069-aws #75~20.04.1-Ubuntu SMP Mon Aug 19 16:22:47 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [1559da34b47646301b138b6609891e4e385b98be7dfb19ec4a58e286b569f3bb] <==
	I0914 18:24:54.302045       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I0914 18:24:54.601530       1 controller.go:334] Starting controller kube-network-policies
	I0914 18:24:54.601548       1 controller.go:338] Waiting for informer caches to sync
	I0914 18:24:54.601553       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I0914 18:24:54.801752       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I0914 18:24:54.802072       1 metrics.go:61] Registering metrics
	I0914 18:24:54.802228       1 controller.go:374] Syncing nftables rules
	I0914 18:25:04.605461       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 18:25:04.605536       1 main.go:299] handling current node
	I0914 18:25:14.600422       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 18:25:14.600456       1 main.go:299] handling current node
	I0914 18:25:24.608998       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 18:25:24.609032       1 main.go:299] handling current node
	I0914 18:25:34.608678       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 18:25:34.608711       1 main.go:299] handling current node
	I0914 18:25:44.600844       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 18:25:44.600875       1 main.go:299] handling current node
	I0914 18:25:54.600579       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 18:25:54.600790       1 main.go:299] handling current node
	I0914 18:26:04.607768       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 18:26:04.607802       1 main.go:299] handling current node
	I0914 18:26:14.604768       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 18:26:14.604801       1 main.go:299] handling current node
	I0914 18:26:24.599937       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 18:26:24.599981       1 main.go:299] handling current node
	
	
	==> kindnet [e7cafc47b5d2ef5fcac5bb32122b8aed60c84a4f307e29cb4dd74e2894235bf6] <==
	I0914 18:30:56.454548       1 main.go:299] handling current node
	I0914 18:31:06.455666       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 18:31:06.455708       1 main.go:299] handling current node
	I0914 18:31:16.447967       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 18:31:16.448026       1 main.go:299] handling current node
	I0914 18:31:26.455163       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 18:31:26.455198       1 main.go:299] handling current node
	I0914 18:31:36.456362       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 18:31:36.456400       1 main.go:299] handling current node
	I0914 18:31:46.455830       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 18:31:46.455867       1 main.go:299] handling current node
	I0914 18:31:56.447198       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 18:31:56.447408       1 main.go:299] handling current node
	I0914 18:32:06.455168       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 18:32:06.455208       1 main.go:299] handling current node
	I0914 18:32:16.447737       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 18:32:16.447785       1 main.go:299] handling current node
	I0914 18:32:26.455144       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 18:32:26.455177       1 main.go:299] handling current node
	I0914 18:32:36.455736       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 18:32:36.455773       1 main.go:299] handling current node
	I0914 18:32:46.456712       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 18:32:46.456747       1 main.go:299] handling current node
	I0914 18:32:56.453495       1 main.go:295] Handling node with IPs: map[192.168.76.2:{}]
	I0914 18:32:56.453531       1 main.go:299] handling current node
	
	
	==> kube-apiserver [d37fe18d4149efe0c0585d15624e1b685b673fe1e1318b8d8576a613caea0cd6] <==
	I0914 18:24:33.428465       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0914 18:24:33.428639       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0914 18:24:33.443464       1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
	I0914 18:24:33.448949       1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
	I0914 18:24:33.448970       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
	I0914 18:24:33.893760       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0914 18:24:33.932596       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0914 18:24:34.084864       1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0914 18:24:34.085968       1 controller.go:606] quota admission added evaluator for: endpoints
	I0914 18:24:34.090478       1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0914 18:24:35.068750       1 controller.go:606] quota admission added evaluator for: serviceaccounts
	I0914 18:24:35.534537       1 controller.go:606] quota admission added evaluator for: deployments.apps
	I0914 18:24:35.647014       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
	I0914 18:24:44.151889       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
	I0914 18:24:51.112155       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
	I0914 18:24:51.114700       1 controller.go:606] quota admission added evaluator for: replicasets.apps
	I0914 18:24:59.295657       1 client.go:360] parsed scheme: "passthrough"
	I0914 18:24:59.297506       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0914 18:24:59.297547       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0914 18:25:41.266195       1 client.go:360] parsed scheme: "passthrough"
	I0914 18:25:41.266249       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0914 18:25:41.266414       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0914 18:26:21.233174       1 client.go:360] parsed scheme: "passthrough"
	I0914 18:26:21.233233       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0914 18:26:21.233241       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-apiserver [e27323b296260729fa2687560071d42a4514b4f137953b3d5c8216e446b15cce] <==
	I0914 18:29:57.914731       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0914 18:29:57.914819       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0914 18:30:16.906995       1 handler_proxy.go:102] no RequestInfo found in the context
	E0914 18:30:16.907074       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 18:30:16.907091       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 18:30:34.767808       1 client.go:360] parsed scheme: "passthrough"
	I0914 18:30:34.767852       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0914 18:30:34.767861       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0914 18:31:08.793800       1 client.go:360] parsed scheme: "passthrough"
	I0914 18:31:08.793848       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0914 18:31:08.793856       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0914 18:31:39.210945       1 client.go:360] parsed scheme: "passthrough"
	I0914 18:31:39.210989       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0914 18:31:39.210998       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	W0914 18:32:14.031470       1 handler_proxy.go:102] no RequestInfo found in the context
	E0914 18:32:14.031552       1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0914 18:32:14.031565       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0914 18:32:21.525141       1 client.go:360] parsed scheme: "passthrough"
	I0914 18:32:21.525186       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0914 18:32:21.525218       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	I0914 18:32:56.697213       1 client.go:360] parsed scheme: "passthrough"
	I0914 18:32:56.697262       1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
	I0914 18:32:56.697271       1 clientconn.go:948] ClientConn switching balancer to "pick_first"
	
	
	==> kube-controller-manager [212d2e4c98e3b3c1c815131de347b7cfc628a01de49836ecac2a504508d99fb1] <==
	E0914 18:28:33.711046       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0914 18:28:37.314504       1 request.go:655] Throttling request took 1.048399987s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0914 18:28:38.166286       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 18:29:04.213120       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0914 18:29:09.816680       1 request.go:655] Throttling request took 1.048287484s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0914 18:29:10.668274       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 18:29:34.714963       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0914 18:29:42.318667       1 request.go:655] Throttling request took 1.048203656s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0914 18:29:43.170083       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 18:30:05.216883       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0914 18:30:14.820686       1 request.go:655] Throttling request took 1.048600056s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0914 18:30:15.672120       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 18:30:35.718658       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0914 18:30:47.322524       1 request.go:655] Throttling request took 1.048541399s, request: GET:https://192.168.76.2:8443/apis/authorization.k8s.io/v1beta1?timeout=32s
	W0914 18:30:48.174023       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 18:31:06.220472       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0914 18:31:19.824557       1 request.go:655] Throttling request took 1.048372507s, request: GET:https://192.168.76.2:8443/apis/apiregistration.k8s.io/v1beta1?timeout=32s
	W0914 18:31:20.675941       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 18:31:36.722312       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0914 18:31:52.326460       1 request.go:655] Throttling request took 1.048416463s, request: GET:https://192.168.76.2:8443/apis/authorization.k8s.io/v1beta1?timeout=32s
	W0914 18:31:53.177905       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 18:32:07.224358       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	I0914 18:32:24.828358       1 request.go:655] Throttling request took 1.048156099s, request: GET:https://192.168.76.2:8443/apis/extensions/v1beta1?timeout=32s
	W0914 18:32:25.679953       1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0914 18:32:37.726732       1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-controller-manager [a29bb42affc63d1ac4d075c5a582547362849f78991e81dccb9e9383a9acc1e6] <==
	I0914 18:24:51.163111       1 shared_informer.go:247] Caches are synced for ReplicaSet 
	I0914 18:24:51.183252       1 event.go:291] "Event occurred" object="kube-system/kindnet" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-q4rtq"
	I0914 18:24:51.183396       1 shared_informer.go:247] Caches are synced for endpoint 
	I0914 18:24:51.190344       1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
	I0914 18:24:51.208448       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5cjbh"
	I0914 18:24:51.248435       1 range_allocator.go:373] Set node old-k8s-version-947842 PodCIDR to [10.244.0.0/24]
	I0914 18:24:51.249264       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-9phk5"
	I0914 18:24:51.283660       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
	I0914 18:24:51.315704       1 shared_informer.go:247] Caches are synced for attach detach 
	I0914 18:24:51.377700       1 shared_informer.go:247] Caches are synced for resource quota 
	I0914 18:24:51.380800       1 shared_informer.go:247] Caches are synced for disruption 
	I0914 18:24:51.380818       1 disruption.go:339] Sending events to api server.
	I0914 18:24:51.381340       1 shared_informer.go:247] Caches are synced for stateful set 
	I0914 18:24:51.441826       1 shared_informer.go:247] Caches are synced for resource quota 
	I0914 18:24:51.478302       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-nr99f"
	E0914 18:24:51.590999       1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"b2926dc6-6700-46bf-80ce-f606e98e4273", ResourceVersion:"377", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63861935075, loc:(*time.Location)(0x632eb80)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400147c4e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400147c540)}, v1.ManagedFieldsEntry{Manager:"kube-co
ntroller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0x400147c5a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0x400147c600)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0x400147c660), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElastic
BlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0x4001ef6ec0), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSour
ce)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400147c6c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSo
urce)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0x400147c720), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil),
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil),
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0x400147c7e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"F
ile", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0x4001e712c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x400107e7b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0x40004107e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)
(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0x4001c14198)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0x400107e808)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest ve
rsion and try again
	I0914 18:24:51.856767       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
	I0914 18:24:51.907942       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0914 18:24:51.907964       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0914 18:24:51.956880       1 shared_informer.go:247] Caches are synced for garbage collector 
	I0914 18:24:52.682893       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
	I0914 18:24:52.823900       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-9phk5"
	I0914 18:24:56.079179       1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
	I0914 18:26:29.144607       1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-9975d5f86 to 1"
	E0914 18:26:29.262712       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-proxy [72df987200414e54ca3d144bf10d64ca95c7922359fc2e2386694ec8173c1e67] <==
	I0914 18:24:52.165597       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0914 18:24:52.165686       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0914 18:24:52.202462       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0914 18:24:52.202547       1 server_others.go:185] Using iptables Proxier.
	I0914 18:24:52.202750       1 server.go:650] Version: v1.20.0
	I0914 18:24:52.203226       1 config.go:315] Starting service config controller
	I0914 18:24:52.203235       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0914 18:24:52.205402       1 config.go:224] Starting endpoint slice config controller
	I0914 18:24:52.205414       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0914 18:24:52.303357       1 shared_informer.go:247] Caches are synced for service config 
	I0914 18:24:52.305548       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-proxy [7360c053a69d5601c51b989f2ac23dd90861afc62b3326c3fbc58a6e719e01e1] <==
	I0914 18:27:17.070423       1 node.go:172] Successfully retrieved node IP: 192.168.76.2
	I0914 18:27:17.070506       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.76.2), assume IPv4 operation
	W0914 18:27:17.129906       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
	I0914 18:27:17.130210       1 server_others.go:185] Using iptables Proxier.
	I0914 18:27:17.130779       1 server.go:650] Version: v1.20.0
	I0914 18:27:17.133220       1 config.go:315] Starting service config controller
	I0914 18:27:17.151568       1 shared_informer.go:240] Waiting for caches to sync for service config
	I0914 18:27:17.145015       1 config.go:224] Starting endpoint slice config controller
	I0914 18:27:17.159192       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
	I0914 18:27:17.251983       1 shared_informer.go:247] Caches are synced for service config 
	I0914 18:27:17.259403       1 shared_informer.go:247] Caches are synced for endpoint slice config 
	
	
	==> kube-scheduler [9b8c5a5dbacbe4e27da49330f956e807adc9d438df1cdf63b08ac45205860dba] <==
	I0914 18:27:05.239176       1 serving.go:331] Generated self-signed cert in-memory
	W0914 18:27:12.970490       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0914 18:27:12.970521       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0914 18:27:12.970529       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 18:27:12.970543       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 18:27:13.130758       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0914 18:27:13.154648       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	I0914 18:27:13.154720       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 18:27:13.157514       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 18:27:13.257879       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kube-scheduler [a95b8ab1729a1aa8c7a7c40de867414e37edf9cdc76797052a9cca425d7bca7f] <==
	W0914 18:24:32.578542       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0914 18:24:32.578616       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0914 18:24:32.649183       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
	I0914 18:24:32.651440       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 18:24:32.651473       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0914 18:24:32.651495       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0914 18:24:32.656353       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 18:24:32.660731       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 18:24:32.661050       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0914 18:24:32.661462       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0914 18:24:32.663369       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0914 18:24:32.663810       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 18:24:32.663884       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 18:24:32.663940       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0914 18:24:32.663996       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 18:24:32.664047       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0914 18:24:32.664102       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0914 18:24:32.664162       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0914 18:24:33.482786       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0914 18:24:33.586115       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0914 18:24:33.628635       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0914 18:24:33.641155       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0914 18:24:33.676061       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0914 18:24:33.688777       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0914 18:24:35.955282       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	Sep 14 18:31:17 old-k8s-version-947842 kubelet[663]: E0914 18:31:17.497262     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 18:31:22 old-k8s-version-947842 kubelet[663]: I0914 18:31:22.496446     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 5d1ec4622c4aaa9242035f3e32d5788774c045b33b4248468b558b8acdf7fb49
	Sep 14 18:31:22 old-k8s-version-947842 kubelet[663]: E0914 18:31:22.497240     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	Sep 14 18:31:29 old-k8s-version-947842 kubelet[663]: E0914 18:31:29.498360     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 18:31:35 old-k8s-version-947842 kubelet[663]: I0914 18:31:35.496525     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 5d1ec4622c4aaa9242035f3e32d5788774c045b33b4248468b558b8acdf7fb49
	Sep 14 18:31:35 old-k8s-version-947842 kubelet[663]: E0914 18:31:35.497333     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	Sep 14 18:31:40 old-k8s-version-947842 kubelet[663]: E0914 18:31:40.497245     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 18:31:49 old-k8s-version-947842 kubelet[663]: I0914 18:31:49.496922     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 5d1ec4622c4aaa9242035f3e32d5788774c045b33b4248468b558b8acdf7fb49
	Sep 14 18:31:49 old-k8s-version-947842 kubelet[663]: E0914 18:31:49.497799     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	Sep 14 18:31:54 old-k8s-version-947842 kubelet[663]: E0914 18:31:54.497275     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 18:32:04 old-k8s-version-947842 kubelet[663]: I0914 18:32:04.496481     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 5d1ec4622c4aaa9242035f3e32d5788774c045b33b4248468b558b8acdf7fb49
	Sep 14 18:32:04 old-k8s-version-947842 kubelet[663]: E0914 18:32:04.496848     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	Sep 14 18:32:07 old-k8s-version-947842 kubelet[663]: E0914 18:32:07.497491     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 18:32:17 old-k8s-version-947842 kubelet[663]: I0914 18:32:17.496600     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 5d1ec4622c4aaa9242035f3e32d5788774c045b33b4248468b558b8acdf7fb49
	Sep 14 18:32:17 old-k8s-version-947842 kubelet[663]: E0914 18:32:17.497769     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	Sep 14 18:32:19 old-k8s-version-947842 kubelet[663]: E0914 18:32:19.497750     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 18:32:30 old-k8s-version-947842 kubelet[663]: I0914 18:32:30.496488     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 5d1ec4622c4aaa9242035f3e32d5788774c045b33b4248468b558b8acdf7fb49
	Sep 14 18:32:30 old-k8s-version-947842 kubelet[663]: E0914 18:32:30.496884     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	Sep 14 18:32:33 old-k8s-version-947842 kubelet[663]: E0914 18:32:33.498079     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 14 18:32:45 old-k8s-version-947842 kubelet[663]: I0914 18:32:45.496563     663 scope.go:95] [topologymanager] RemoveContainer - Container ID: 5d1ec4622c4aaa9242035f3e32d5788774c045b33b4248468b558b8acdf7fb49
	Sep 14 18:32:45 old-k8s-version-947842 kubelet[663]: E0914 18:32:45.497027     663 pod_workers.go:191] Error syncing pod 52f1e319-78d9-449f-b4aa-90f053589c24 ("dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "back-off 2m40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-8d5bb5db8-zbxkb_kubernetes-dashboard(52f1e319-78d9-449f-b4aa-90f053589c24)"
	Sep 14 18:32:47 old-k8s-version-947842 kubelet[663]: E0914 18:32:47.506493     663 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Sep 14 18:32:47 old-k8s-version-947842 kubelet[663]: E0914 18:32:47.506559     663 kuberuntime_image.go:51] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Sep 14 18:32:47 old-k8s-version-947842 kubelet[663]: E0914 18:32:47.506923     663 kuberuntime_manager.go:829] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:metrics-server-token-c5t7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Handler:Handler{Exec
:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod metrics-server-9975d5f86-2mxk9_kube-system(aecf219
2-b7b6-4826-9f1a-54957e033b72): ErrImagePull: rpc error: code = Unknown desc = failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host
	Sep 14 18:32:47 old-k8s-version-947842 kubelet[663]: E0914 18:32:47.507083     663 pod_workers.go:191] Error syncing pod aecf2192-b7b6-4826-9f1a-54957e033b72 ("metrics-server-9975d5f86-2mxk9_kube-system(aecf2192-b7b6-4826-9f1a-54957e033b72)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	
	
	==> kubernetes-dashboard [bd3ccab56e960c40248fc23dd1fe2fe362b664956192709573790aec544544bb] <==
	2024/09/14 18:27:37 Using namespace: kubernetes-dashboard
	2024/09/14 18:27:37 Using in-cluster config to connect to apiserver
	2024/09/14 18:27:37 Using secret token for csrf signing
	2024/09/14 18:27:37 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/09/14 18:27:37 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/09/14 18:27:37 Successful initial request to the apiserver, version: v1.20.0
	2024/09/14 18:27:37 Generating JWE encryption key
	2024/09/14 18:27:37 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/09/14 18:27:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/09/14 18:27:37 Initializing JWE encryption key from synchronized object
	2024/09/14 18:27:37 Creating in-cluster Sidecar client
	2024/09/14 18:27:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/14 18:27:37 Serving insecurely on HTTP port: 9090
	2024/09/14 18:28:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/14 18:28:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/14 18:29:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/14 18:29:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/14 18:30:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/14 18:30:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/14 18:31:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/14 18:31:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/14 18:32:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/14 18:32:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/09/14 18:27:37 Starting overwatch
	
	
	==> storage-provisioner [baf4a880dcbc428ee2de8ccf87eb5a2ed87629f96295602e31b48fd727df895b] <==
	I0914 18:28:01.603226       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0914 18:28:01.628617       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0914 18:28:01.628688       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0914 18:28:19.176979       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0914 18:28:19.182729       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-947842_8bf5d93a-3fac-428b-91f1-6cdff58d0b16!
	I0914 18:28:19.188852       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0a26f3dc-7d02-409c-b6b9-5da13cb4bd66", APIVersion:"v1", ResourceVersion:"828", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-947842_8bf5d93a-3fac-428b-91f1-6cdff58d0b16 became leader
	I0914 18:28:19.283282       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-947842_8bf5d93a-3fac-428b-91f1-6cdff58d0b16!
	
	
	==> storage-provisioner [e33fb0ef05d96f108d450c97bb5a38a994f0dc7a40c5ebd38be6a3d81bac375d] <==
	I0914 18:27:17.394518       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0914 18:27:47.396988       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-947842 -n old-k8s-version-947842
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-947842 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-9975d5f86-2mxk9
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-947842 describe pod metrics-server-9975d5f86-2mxk9
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-947842 describe pod metrics-server-9975d5f86-2mxk9: exit status 1 (89.866661ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-9975d5f86-2mxk9" not found

** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-947842 describe pod metrics-server-9975d5f86-2mxk9: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (376.23s)


Test pass (298/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 9.15
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.1/json-events 5.74
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.57
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 217.15
31 TestAddons/serial/GCPAuth/Namespaces 0.18
33 TestAddons/parallel/Registry 16.94
34 TestAddons/parallel/Ingress 20.8
35 TestAddons/parallel/InspektorGadget 11.01
36 TestAddons/parallel/MetricsServer 7.05
39 TestAddons/parallel/CSI 41.35
40 TestAddons/parallel/Headlamp 16.91
41 TestAddons/parallel/CloudSpanner 6.65
42 TestAddons/parallel/LocalPath 9.75
43 TestAddons/parallel/NvidiaDevicePlugin 5.93
44 TestAddons/parallel/Yakd 11.84
45 TestAddons/StoppedEnableDisable 12.37
46 TestCertOptions 37.06
47 TestCertExpiration 228.53
49 TestForceSystemdFlag 45.22
50 TestForceSystemdEnv 42.28
51 TestDockerEnvContainerd 46.97
56 TestErrorSpam/setup 28.55
57 TestErrorSpam/start 0.76
58 TestErrorSpam/status 1.07
59 TestErrorSpam/pause 1.76
60 TestErrorSpam/unpause 1.89
61 TestErrorSpam/stop 1.72
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 47.8
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 6.57
68 TestFunctional/serial/KubeContext 0.07
69 TestFunctional/serial/KubectlGetPods 0.16
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.32
73 TestFunctional/serial/CacheCmd/cache/add_local 1.27
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.06
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.07
78 TestFunctional/serial/CacheCmd/cache/delete 0.13
79 TestFunctional/serial/MinikubeKubectlCmd 0.15
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 46.48
82 TestFunctional/serial/ComponentHealth 0.11
83 TestFunctional/serial/LogsCmd 1.75
84 TestFunctional/serial/LogsFileCmd 1.78
85 TestFunctional/serial/InvalidService 4.24
87 TestFunctional/parallel/ConfigCmd 0.46
88 TestFunctional/parallel/DashboardCmd 14.1
89 TestFunctional/parallel/DryRun 0.43
90 TestFunctional/parallel/InternationalLanguage 0.28
91 TestFunctional/parallel/StatusCmd 1.03
95 TestFunctional/parallel/ServiceCmdConnect 7.68
96 TestFunctional/parallel/AddonsCmd 0.13
97 TestFunctional/parallel/PersistentVolumeClaim 23.73
99 TestFunctional/parallel/SSHCmd 0.58
100 TestFunctional/parallel/CpCmd 2.01
102 TestFunctional/parallel/FileSync 0.4
103 TestFunctional/parallel/CertSync 2.12
107 TestFunctional/parallel/NodeLabels 0.12
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.71
111 TestFunctional/parallel/License 0.25
112 TestFunctional/parallel/Version/short 0.08
113 TestFunctional/parallel/Version/components 1.29
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.34
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
118 TestFunctional/parallel/ImageCommands/ImageBuild 3.63
119 TestFunctional/parallel/ImageCommands/Setup 0.74
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.24
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.52
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.46
125 TestFunctional/parallel/ServiceCmd/DeployApp 11.31
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.73
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.41
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.68
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.58
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.3
136 TestFunctional/parallel/ServiceCmd/List 0.33
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.35
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
139 TestFunctional/parallel/ServiceCmd/Format 0.39
140 TestFunctional/parallel/ServiceCmd/URL 0.37
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
148 TestFunctional/parallel/ProfileCmd/profile_list 0.44
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
150 TestFunctional/parallel/MountCmd/any-port 8.01
151 TestFunctional/parallel/MountCmd/specific-port 1.96
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.25
153 TestFunctional/delete_echo-server_images 0.05
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 123.31
160 TestMultiControlPlane/serial/DeployApp 29.11
161 TestMultiControlPlane/serial/PingHostFromPods 1.62
162 TestMultiControlPlane/serial/AddWorkerNode 21.46
163 TestMultiControlPlane/serial/NodeLabels 0.13
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.79
165 TestMultiControlPlane/serial/CopyFile 19.44
166 TestMultiControlPlane/serial/StopSecondaryNode 12.99
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.56
168 TestMultiControlPlane/serial/RestartSecondaryNode 18.95
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.87
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 154.23
171 TestMultiControlPlane/serial/DeleteSecondaryNode 10.5
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.57
173 TestMultiControlPlane/serial/StopCluster 36.18
174 TestMultiControlPlane/serial/RestartCluster 78.9
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.53
176 TestMultiControlPlane/serial/AddSecondaryNode 44.67
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.75
181 TestJSONOutput/start/Command 51.97
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.75
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.68
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.79
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.23
206 TestKicCustomNetwork/create_custom_network 36.8
207 TestKicCustomNetwork/use_default_bridge_network 32.13
208 TestKicExistingNetwork 35.79
209 TestKicCustomSubnet 32.49
210 TestKicStaticIP 32.44
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 69.9
215 TestMountStart/serial/StartWithMountFirst 6.45
216 TestMountStart/serial/VerifyMountFirst 0.27
217 TestMountStart/serial/StartWithMountSecond 6.28
218 TestMountStart/serial/VerifyMountSecond 0.27
219 TestMountStart/serial/DeleteFirst 1.62
220 TestMountStart/serial/VerifyMountPostDelete 0.27
221 TestMountStart/serial/Stop 1.22
222 TestMountStart/serial/RestartStopped 7.47
223 TestMountStart/serial/VerifyMountPostStop 0.27
226 TestMultiNode/serial/FreshStart2Nodes 66.18
227 TestMultiNode/serial/DeployApp2Nodes 18.18
228 TestMultiNode/serial/PingHostFrom2Pods 0.99
229 TestMultiNode/serial/AddNode 20.58
230 TestMultiNode/serial/MultiNodeLabels 0.09
231 TestMultiNode/serial/ProfileList 0.34
232 TestMultiNode/serial/CopyFile 10.3
233 TestMultiNode/serial/StopNode 2.29
234 TestMultiNode/serial/StartAfterStop 10.06
235 TestMultiNode/serial/RestartKeepsNodes 103
236 TestMultiNode/serial/DeleteNode 5.53
237 TestMultiNode/serial/StopMultiNode 23.96
238 TestMultiNode/serial/RestartMultiNode 48.25
239 TestMultiNode/serial/ValidateNameConflict 36.28
244 TestPreload 114.39
246 TestScheduledStopUnix 106.19
249 TestInsufficientStorage 10.35
250 TestRunningBinaryUpgrade 82.26
252 TestKubernetesUpgrade 361.03
253 TestMissingContainerUpgrade 179.51
255 TestPause/serial/Start 87.33
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
258 TestNoKubernetes/serial/StartWithK8s 40.55
259 TestNoKubernetes/serial/StartWithStopK8s 18.2
260 TestNoKubernetes/serial/Start 5.71
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.26
262 TestNoKubernetes/serial/ProfileList 0.98
263 TestNoKubernetes/serial/Stop 1.2
264 TestNoKubernetes/serial/StartNoArgs 6.42
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
266 TestPause/serial/SecondStartNoReconfiguration 6.01
267 TestPause/serial/Pause 0.87
268 TestPause/serial/VerifyStatus 0.45
269 TestPause/serial/Unpause 1.17
270 TestPause/serial/PauseAgain 1.28
271 TestPause/serial/DeletePaused 3.18
272 TestPause/serial/VerifyDeletedResources 0.17
273 TestStoppedBinaryUpgrade/Setup 0.72
274 TestStoppedBinaryUpgrade/Upgrade 107.77
275 TestStoppedBinaryUpgrade/MinikubeLogs 1.03
290 TestNetworkPlugins/group/false 4.48
295 TestStartStop/group/old-k8s-version/serial/FirstStart 148.18
296 TestStartStop/group/old-k8s-version/serial/DeployApp 9.6
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.4
298 TestStartStop/group/old-k8s-version/serial/Stop 12.42
300 TestStartStop/group/no-preload/serial/FirstStart 67.57
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.3
303 TestStartStop/group/no-preload/serial/DeployApp 8.37
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.19
305 TestStartStop/group/no-preload/serial/Stop 12.09
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
307 TestStartStop/group/no-preload/serial/SecondStart 267.6
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
310 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
311 TestStartStop/group/no-preload/serial/Pause 3.18
313 TestStartStop/group/embed-certs/serial/FirstStart 85.74
314 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
315 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.13
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.3
317 TestStartStop/group/old-k8s-version/serial/Pause 4.21
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 79.32
320 TestStartStop/group/embed-certs/serial/DeployApp 10.34
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.2
322 TestStartStop/group/embed-certs/serial/Stop 12.17
323 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.35
324 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
325 TestStartStop/group/embed-certs/serial/SecondStart 266.97
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.4
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.58
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 269.45
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.12
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.33
333 TestStartStop/group/embed-certs/serial/Pause 3.38
335 TestStartStop/group/newest-cni/serial/FirstStart 38.8
336 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
337 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.13
338 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
339 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.99
340 TestNetworkPlugins/group/auto/Start 101.06
341 TestStartStop/group/newest-cni/serial/DeployApp 0
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.62
343 TestStartStop/group/newest-cni/serial/Stop 1.34
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
345 TestStartStop/group/newest-cni/serial/SecondStart 22
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.37
349 TestStartStop/group/newest-cni/serial/Pause 3.58
350 TestNetworkPlugins/group/flannel/Start 50.26
351 TestNetworkPlugins/group/flannel/ControllerPod 6.01
352 TestNetworkPlugins/group/auto/KubeletFlags 0.35
353 TestNetworkPlugins/group/auto/NetCatPod 11.28
354 TestNetworkPlugins/group/flannel/KubeletFlags 0.39
355 TestNetworkPlugins/group/flannel/NetCatPod 10.36
356 TestNetworkPlugins/group/auto/DNS 0.2
357 TestNetworkPlugins/group/auto/Localhost 0.15
358 TestNetworkPlugins/group/auto/HairPin 0.16
359 TestNetworkPlugins/group/flannel/DNS 0.18
360 TestNetworkPlugins/group/flannel/Localhost 0.27
361 TestNetworkPlugins/group/flannel/HairPin 0.2
362 TestNetworkPlugins/group/enable-default-cni/Start 50.03
363 TestNetworkPlugins/group/kindnet/Start 55.55
364 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.4
365 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.3
366 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
367 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
368 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
369 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
370 TestNetworkPlugins/group/kindnet/KubeletFlags 0.46
371 TestNetworkPlugins/group/kindnet/NetCatPod 11.38
372 TestNetworkPlugins/group/kindnet/DNS 0.25
373 TestNetworkPlugins/group/kindnet/Localhost 0.15
374 TestNetworkPlugins/group/kindnet/HairPin 0.16
375 TestNetworkPlugins/group/bridge/Start 51.35
376 TestNetworkPlugins/group/calico/Start 66.95
377 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
378 TestNetworkPlugins/group/bridge/NetCatPod 10.44
379 TestNetworkPlugins/group/bridge/DNS 0.25
380 TestNetworkPlugins/group/bridge/Localhost 0.2
381 TestNetworkPlugins/group/bridge/HairPin 0.21
382 TestNetworkPlugins/group/custom-flannel/Start 57.49
383 TestNetworkPlugins/group/calico/ControllerPod 6.01
384 TestNetworkPlugins/group/calico/KubeletFlags 0.37
385 TestNetworkPlugins/group/calico/NetCatPod 10.38
386 TestNetworkPlugins/group/calico/DNS 0.28
387 TestNetworkPlugins/group/calico/Localhost 0.2
388 TestNetworkPlugins/group/calico/HairPin 0.18
389 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.29
390 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.32
391 TestNetworkPlugins/group/custom-flannel/DNS 0.38
392 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
393 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
TestDownloadOnly/v1.20.0/json-events (9.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-078725 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-078725 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.151782171s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (9.15s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-078725
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-078725: exit status 85 (71.441102ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-078725 | jenkins | v1.34.0 | 14 Sep 24 17:36 UTC |          |
	|         | -p download-only-078725        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 17:36:23
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 17:36:23.510756  298260 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:36:23.510966  298260 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:36:23.510976  298260 out.go:358] Setting ErrFile to fd 2...
	I0914 17:36:23.510982  298260 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:36:23.511262  298260 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-292860/.minikube/bin
	W0914 17:36:23.511426  298260 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19643-292860/.minikube/config/config.json: open /home/jenkins/minikube-integration/19643-292860/.minikube/config/config.json: no such file or directory
	I0914 17:36:23.511962  298260 out.go:352] Setting JSON to true
	I0914 17:36:23.513016  298260 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4735,"bootTime":1726330648,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0914 17:36:23.513098  298260 start.go:139] virtualization:  
	I0914 17:36:23.516236  298260 out.go:97] [download-only-078725] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0914 17:36:23.516420  298260 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19643-292860/.minikube/cache/preloaded-tarball: no such file or directory
	I0914 17:36:23.516470  298260 notify.go:220] Checking for updates...
	I0914 17:36:23.518303  298260 out.go:169] MINIKUBE_LOCATION=19643
	I0914 17:36:23.520048  298260 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 17:36:23.521749  298260 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19643-292860/kubeconfig
	I0914 17:36:23.523491  298260 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-292860/.minikube
	I0914 17:36:23.525214  298260 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0914 17:36:23.528742  298260 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0914 17:36:23.529016  298260 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 17:36:23.550952  298260 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 17:36:23.551079  298260 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 17:36:23.615374  298260 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-14 17:36:23.605983095 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 17:36:23.615491  298260 docker.go:318] overlay module found
	I0914 17:36:23.617375  298260 out.go:97] Using the docker driver based on user configuration
	I0914 17:36:23.617397  298260 start.go:297] selected driver: docker
	I0914 17:36:23.617403  298260 start.go:901] validating driver "docker" against <nil>
	I0914 17:36:23.617501  298260 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 17:36:23.674306  298260 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-14 17:36:23.665224353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 17:36:23.674552  298260 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 17:36:23.674831  298260 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0914 17:36:23.674989  298260 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 17:36:23.676788  298260 out.go:169] Using Docker driver with root privileges
	I0914 17:36:23.678617  298260 cni.go:84] Creating CNI manager for ""
	I0914 17:36:23.678681  298260 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 17:36:23.678693  298260 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 17:36:23.678773  298260 start.go:340] cluster config:
	{Name:download-only-078725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-078725 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:36:23.680225  298260 out.go:97] Starting "download-only-078725" primary control-plane node in "download-only-078725" cluster
	I0914 17:36:23.680247  298260 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0914 17:36:23.681725  298260 out.go:97] Pulling base image v0.0.45-1726281268-19643 ...
	I0914 17:36:23.681750  298260 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0914 17:36:23.681893  298260 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e in local docker daemon
	I0914 17:36:23.699161  298260 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e to local cache
	I0914 17:36:23.699426  298260 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e in local cache directory
	I0914 17:36:23.699571  298260 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e to local cache
	I0914 17:36:23.742558  298260 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0914 17:36:23.742597  298260 cache.go:56] Caching tarball of preloaded images
	I0914 17:36:23.742780  298260 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0914 17:36:23.744966  298260 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0914 17:36:23.744992  298260 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0914 17:36:23.828646  298260 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19643-292860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-078725 host does not exist
	  To start a cluster, run: "minikube start -p download-only-078725"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-078725
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (5.74s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-574797 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-574797 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.73570016s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (5.74s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-574797
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-574797: exit status 85 (69.884622ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-078725 | jenkins | v1.34.0 | 14 Sep 24 17:36 UTC |                     |
	|         | -p download-only-078725        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 14 Sep 24 17:36 UTC | 14 Sep 24 17:36 UTC |
	| delete  | -p download-only-078725        | download-only-078725 | jenkins | v1.34.0 | 14 Sep 24 17:36 UTC | 14 Sep 24 17:36 UTC |
	| start   | -o=json --download-only        | download-only-574797 | jenkins | v1.34.0 | 14 Sep 24 17:36 UTC |                     |
	|         | -p download-only-574797        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/14 17:36:33
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0914 17:36:33.085319  298456 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:36:33.085547  298456 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:36:33.085575  298456 out.go:358] Setting ErrFile to fd 2...
	I0914 17:36:33.085595  298456 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:36:33.085908  298456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-292860/.minikube/bin
	I0914 17:36:33.086411  298456 out.go:352] Setting JSON to true
	I0914 17:36:33.087392  298456 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4745,"bootTime":1726330648,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0914 17:36:33.087501  298456 start.go:139] virtualization:  
	I0914 17:36:33.090577  298456 out.go:97] [download-only-574797] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0914 17:36:33.090851  298456 notify.go:220] Checking for updates...
	I0914 17:36:33.093001  298456 out.go:169] MINIKUBE_LOCATION=19643
	I0914 17:36:33.095341  298456 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 17:36:33.097218  298456 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19643-292860/kubeconfig
	I0914 17:36:33.099279  298456 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-292860/.minikube
	I0914 17:36:33.101284  298456 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0914 17:36:33.104890  298456 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0914 17:36:33.105190  298456 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 17:36:33.133717  298456 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 17:36:33.133841  298456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 17:36:33.196121  298456 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-14 17:36:33.18626902 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 17:36:33.196238  298456 docker.go:318] overlay module found
	I0914 17:36:33.198250  298456 out.go:97] Using the docker driver based on user configuration
	I0914 17:36:33.198281  298456 start.go:297] selected driver: docker
	I0914 17:36:33.198288  298456 start.go:901] validating driver "docker" against <nil>
	I0914 17:36:33.198404  298456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 17:36:33.253765  298456 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-14 17:36:33.24434862 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 17:36:33.253984  298456 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0914 17:36:33.254283  298456 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0914 17:36:33.254446  298456 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0914 17:36:33.256528  298456 out.go:169] Using Docker driver with root privileges
	I0914 17:36:33.258544  298456 cni.go:84] Creating CNI manager for ""
	I0914 17:36:33.258614  298456 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0914 17:36:33.258623  298456 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0914 17:36:33.258708  298456 start.go:340] cluster config:
	{Name:download-only-574797 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-574797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:36:33.260675  298456 out.go:97] Starting "download-only-574797" primary control-plane node in "download-only-574797" cluster
	I0914 17:36:33.260713  298456 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0914 17:36:33.262786  298456 out.go:97] Pulling base image v0.0.45-1726281268-19643 ...
	I0914 17:36:33.262826  298456 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0914 17:36:33.263028  298456 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e in local docker daemon
	I0914 17:36:33.279316  298456 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e to local cache
	I0914 17:36:33.279462  298456 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e in local cache directory
	I0914 17:36:33.279487  298456 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e in local cache directory, skipping pull
	I0914 17:36:33.279496  298456 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e exists in cache, skipping pull
	I0914 17:36:33.279504  298456 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e as a tarball
	I0914 17:36:33.315737  298456 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0914 17:36:33.315765  298456 cache.go:56] Caching tarball of preloaded images
	I0914 17:36:33.315946  298456 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0914 17:36:33.317950  298456 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0914 17:36:33.317979  298456 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0914 17:36:33.386065  298456 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:b0cdb5ac9449e6e1388c2153988f76f5 -> /home/jenkins/minikube-integration/19643-292860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0914 17:36:37.226363  298456 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0914 17:36:37.226471  298456 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19643-292860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0914 17:36:38.082746  298456 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0914 17:36:38.083182  298456 profile.go:143] Saving config to /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/download-only-574797/config.json ...
	I0914 17:36:38.083221  298456 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/download-only-574797/config.json: {Name:mk5d33723ecbd6326f9b6e3059d0e5f8f4848e23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0914 17:36:38.083884  298456 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0914 17:36:38.084517  298456 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19643-292860/.minikube/cache/linux/arm64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-574797 host does not exist
	  To start a cluster, run: "minikube start -p download-only-574797"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-574797
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestBinaryMirror (0.57s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-941961 --alsologtostderr --binary-mirror http://127.0.0.1:39039 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-941961" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-941961
--- PASS: TestBinaryMirror (0.57s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-478069
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-478069: exit status 85 (65.308472ms)

                                                
                                                
-- stdout --
	* Profile "addons-478069" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-478069"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-478069
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-478069: exit status 85 (60.070111ms)

                                                
                                                
-- stdout --
	* Profile "addons-478069" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-478069"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (217.15s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-478069 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-478069 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m37.150021589s)
--- PASS: TestAddons/Setup (217.15s)
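
For reference, a quick way to confirm what that start actually enabled is to list the addons for the same profile afterwards (a minimal sketch; only the profile name from this run is assumed):

	out/minikube-linux-arm64 addons list -p addons-478069    # shows enabled/disabled addons
	kubectl --context addons-478069 get pods -A              # addon workloads should be Running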

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-478069 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-478069 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
TestAddons/parallel/Registry (16.94s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.19503ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-z88fn" [5f368de6-9d7a-4369-b81a-e84fc032aa5e] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005505929s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-c8cfs" [81b66c2e-ca87-4896-b408-0013c9df1d76] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004091933s
addons_test.go:342: (dbg) Run:  kubectl --context addons-478069 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-478069 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-478069 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.931743224s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-478069 ip
2024/09/14 17:44:13 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-478069 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.94s)
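
The registry check above boils down to an in-cluster HTTP probe plus a host-side probe of the node IP; a manual sketch using the same commands recorded here:

	kubectl --context addons-478069 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	curl -s "http://$(out/minikube-linux-arm64 -p addons-478069 ip):5000"    # node-IP probe, as in the DEBUG GET above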

                                                
                                    
TestAddons/parallel/Ingress (20.8s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-478069 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-478069 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-478069 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [44067672-4b3d-4bc4-ada3-b3445180063a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [44067672-4b3d-4bc4-ada3-b3445180063a] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003717376s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-478069 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-478069 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-478069 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-478069 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-478069 addons disable ingress-dns --alsologtostderr -v=1: (1.203804746s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-478069 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-478069 addons disable ingress --alsologtostderr -v=1: (7.795774656s)
--- PASS: TestAddons/parallel/Ingress (20.80s)
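
The ingress assertion is a curl from inside the node with the Host header the Ingress routes on, followed by an ingress-dns lookup against the node IP (a sketch of the same commands as above):

	out/minikube-linux-arm64 -p addons-478069 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-478069 ip)"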

                                                
                                    
TestAddons/parallel/InspektorGadget (11.01s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-5nf8v" [2d1cefd4-5000-4feb-b972-b1fa0c546aa3] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004154044s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-478069
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-478069: (6.002737822s)
--- PASS: TestAddons/parallel/InspektorGadget (11.01s)

                                                
                                    
TestAddons/parallel/MetricsServer (7.05s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 3.221179ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-hzswq" [17e40b4b-651d-463b-b551-a297450fe05f] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.007900225s
addons_test.go:417: (dbg) Run:  kubectl --context addons-478069 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-478069 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (7.05s)
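
Once metrics-server is healthy the same data can be pulled interactively (sketch; the node-level query is an extra check not exercised by this test):

	kubectl --context addons-478069 top pods -n kube-system
	kubectl --context addons-478069 top nodes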

                                                
                                    
TestAddons/parallel/CSI (41.35s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 5.162911ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-478069 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-478069 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-478069 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-478069 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a3e901c5-1efe-4e57-a3de-3d8bc1878b1d] Pending
helpers_test.go:344: "task-pv-pod" [a3e901c5-1efe-4e57-a3de-3d8bc1878b1d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a3e901c5-1efe-4e57-a3de-3d8bc1878b1d] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.0042178s
addons_test.go:590: (dbg) Run:  kubectl --context addons-478069 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-478069 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-478069 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-478069 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-478069 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-478069 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-478069 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-478069 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-478069 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-478069 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-478069 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-478069 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-478069 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-478069 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-478069 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-478069 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-478069 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-478069 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [40310279-0e5e-44fc-8d8f-051f72c44f1e] Pending
helpers_test.go:344: "task-pv-pod-restore" [40310279-0e5e-44fc-8d8f-051f72c44f1e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [40310279-0e5e-44fc-8d8f-051f72c44f1e] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004442137s
addons_test.go:632: (dbg) Run:  kubectl --context addons-478069 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-478069 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-478069 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-478069 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-478069 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.909174182s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-478069 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-arm64 -p addons-478069 addons disable volumesnapshots --alsologtostderr -v=1: (1.052370025s)
--- PASS: TestAddons/parallel/CSI (41.35s)
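
The CSI flow above is: provision a PVC, mount it from a pod, snapshot it, then restore into a new PVC and pod. A compressed manual sketch using the bundled testdata manifests from this run:

	kubectl --context addons-478069 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-478069 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-478069 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-478069 get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'
	kubectl --context addons-478069 create -f testdata/csi-hostpath-driver/pvc-restore.yaml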

                                                
                                    
TestAddons/parallel/Headlamp (16.91s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-478069 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-478069 --alsologtostderr -v=1: (1.069564835s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-4tx8g" [37a9c48f-89b6-473e-b75c-edea6610b454] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-4tx8g" [37a9c48f-89b6-473e-b75c-edea6610b454] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-4tx8g" [37a9c48f-89b6-473e-b75c-edea6610b454] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003775543s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-478069 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-478069 addons disable headlamp --alsologtostderr -v=1: (5.838908638s)
--- PASS: TestAddons/parallel/Headlamp (16.91s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.65s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-rgckg" [54f7f4ac-f575-4ff6-b123-aa2cc693cdff] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004426853s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-478069
--- PASS: TestAddons/parallel/CloudSpanner (6.65s)

                                                
                                    
TestAddons/parallel/LocalPath (9.75s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-478069 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-478069 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-478069 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-478069 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-478069 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-478069 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-478069 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-478069 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [58a3e9fa-dc88-4bf1-bf51-b728986ada42] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [58a3e9fa-dc88-4bf1-bf51-b728986ada42] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [58a3e9fa-dc88-4bf1-bf51-b728986ada42] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003922723s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-478069 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-478069 ssh "cat /opt/local-path-provisioner/pvc-6ae08916-59ca-4ef2-97dc-d57fe2871ecf_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-478069 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-478069 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-478069 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.75s)
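
The repeated PVC polls above are expected: with the local-path provisioner the claim typically stays Pending until a consuming pod is scheduled, so the phase is only checked after the pod is applied. A manual sketch with the same manifests:

	kubectl --context addons-478069 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-478069 apply -f testdata/storage-provisioner-rancher/pod.yaml
	kubectl --context addons-478069 get pvc test-pvc -o jsonpath='{.status.phase}'    # Pending until the pod schedules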

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.93s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-5mrzf" [07826017-ddb7-43a6-9266-8cc86a3a9114] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004561905s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-478069
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.93s)

                                                
                                    
TestAddons/parallel/Yakd (11.84s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-547j8" [d4b5a192-c5da-469d-99f2-460e452cf7c6] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003454831s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-478069 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-478069 addons disable yakd --alsologtostderr -v=1: (5.838981912s)
--- PASS: TestAddons/parallel/Yakd (11.84s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.37s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-478069
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-478069: (12.01525948s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-478069
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-478069
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-478069
--- PASS: TestAddons/StoppedEnableDisable (12.37s)
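
Addon enable/disable is expected to work against a stopped profile, which is what this entry exercises; a manual sketch of the same sequence:

	out/minikube-linux-arm64 stop -p addons-478069
	out/minikube-linux-arm64 addons enable dashboard -p addons-478069
	out/minikube-linux-arm64 addons disable dashboard -p addons-478069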

                                                
                                    
TestCertOptions (37.06s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-553239 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
E0914 18:23:20.864426  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-553239 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (34.370188051s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-553239 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-553239 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-553239 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-553239" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-553239
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-553239: (2.007288184s)
--- PASS: TestCertOptions (37.06s)
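
The fields asserted here (the extra SANs and the non-default apiserver port) can be inspected straight from the certificate on the node; a sketch, with a grep added here purely for readability:

	out/minikube-linux-arm64 -p cert-options-553239 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -E 'DNS:|IP Address:'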

                                                
                                    
TestCertExpiration (228.53s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-340657 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-340657 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (37.208726764s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-340657 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-340657 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (9.080014681s)
helpers_test.go:175: Cleaning up "cert-expiration-340657" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-340657
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-340657: (2.24506356s)
--- PASS: TestCertExpiration (228.53s)

                                                
                                    
TestForceSystemdFlag (45.22s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-560938 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-560938 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (42.408754654s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-560938 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-560938" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-560938
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-560938: (2.334783596s)
--- PASS: TestForceSystemdFlag (45.22s)
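
What the config.toml read is checking is containerd's cgroup driver; a manual spot-check (sketch; SystemdCgroup is the containerd runc option that --force-systemd is expected to turn on):

	out/minikube-linux-arm64 -p force-systemd-flag-560938 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup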

                                                
                                    
TestForceSystemdEnv (42.28s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-676567 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-676567 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (39.53436393s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-676567 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-676567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-676567
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-676567: (2.290165738s)
--- PASS: TestForceSystemdEnv (42.28s)

                                                
                                    
TestDockerEnvContainerd (46.97s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-050610 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-050610 --driver=docker  --container-runtime=containerd: (31.464632409s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-050610"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-050610": (1.019124437s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-lcfyn6SwaRu5/agent.316815" SSH_AGENT_PID="316816" DOCKER_HOST=ssh://docker@127.0.0.1:33143 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-lcfyn6SwaRu5/agent.316815" SSH_AGENT_PID="316816" DOCKER_HOST=ssh://docker@127.0.0.1:33143 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-lcfyn6SwaRu5/agent.316815" SSH_AGENT_PID="316816" DOCKER_HOST=ssh://docker@127.0.0.1:33143 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.189570496s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-lcfyn6SwaRu5/agent.316815" SSH_AGENT_PID="316816" DOCKER_HOST=ssh://docker@127.0.0.1:33143 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-050610" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-050610
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-050610: (1.94689756s)
--- PASS: TestDockerEnvContainerd (46.97s)
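
Outside the test harness, the docker-env output is normally consumed with eval so the host docker CLI talks to the daemon inside the node over SSH (a sketch, assuming the dockerenv-050610 profile were still running):

	eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-050610)"
	docker version
	docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env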

                                                
                                    
TestErrorSpam/setup (28.55s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-579581 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-579581 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-579581 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-579581 --driver=docker  --container-runtime=containerd: (28.546261231s)
--- PASS: TestErrorSpam/setup (28.55s)

                                                
                                    
TestErrorSpam/start (0.76s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-579581 --log_dir /tmp/nospam-579581 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-579581 --log_dir /tmp/nospam-579581 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-579581 --log_dir /tmp/nospam-579581 start --dry-run
--- PASS: TestErrorSpam/start (0.76s)

                                                
                                    
TestErrorSpam/status (1.07s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-579581 --log_dir /tmp/nospam-579581 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-579581 --log_dir /tmp/nospam-579581 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-579581 --log_dir /tmp/nospam-579581 status
--- PASS: TestErrorSpam/status (1.07s)

                                                
                                    
TestErrorSpam/pause (1.76s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-579581 --log_dir /tmp/nospam-579581 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-579581 --log_dir /tmp/nospam-579581 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-579581 --log_dir /tmp/nospam-579581 pause
--- PASS: TestErrorSpam/pause (1.76s)

                                                
                                    
TestErrorSpam/unpause (1.89s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-579581 --log_dir /tmp/nospam-579581 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-579581 --log_dir /tmp/nospam-579581 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-579581 --log_dir /tmp/nospam-579581 unpause
--- PASS: TestErrorSpam/unpause (1.89s)

                                                
                                    
TestErrorSpam/stop (1.72s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-579581 --log_dir /tmp/nospam-579581 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-579581 --log_dir /tmp/nospam-579581 stop: (1.280131004s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-579581 --log_dir /tmp/nospam-579581 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-579581 --log_dir /tmp/nospam-579581 stop
--- PASS: TestErrorSpam/stop (1.72s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19643-292860/.minikube/files/etc/test/nested/copy/298255/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (47.8s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-884273 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-884273 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (47.792835985s)
--- PASS: TestFunctional/serial/StartWithProxy (47.80s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.57s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-884273 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-884273 --alsologtostderr -v=8: (6.561850838s)
functional_test.go:663: soft start took 6.566625868s for "functional-884273" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.57s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-884273 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.16s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.32s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-884273 cache add registry.k8s.io/pause:3.1: (1.768932492s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-884273 cache add registry.k8s.io/pause:3.3: (1.44075322s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-884273 cache add registry.k8s.io/pause:latest: (1.11143609s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.32s)
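
cache add pulls an image into minikube's local cache and loads it into the node; together with list and delete it forms a simple lifecycle (sketch of the commands used in this group):

	out/minikube-linux-arm64 -p functional-884273 cache add registry.k8s.io/pause:3.1
	out/minikube-linux-arm64 cache list
	out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1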

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-884273 /tmp/TestFunctionalserialCacheCmdcacheadd_local1176200180/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 cache add minikube-local-cache-test:functional-884273
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 cache delete minikube-local-cache-test:functional-884273
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-884273
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-884273 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (297.211841ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-884273 cache reload: (1.137534446s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.07s)
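
The reload check is: remove the image from the node runtime, confirm crictl no longer sees it (the expected non-zero exit above), then repopulate it from the local cache; a sketch mirroring the commands above:

	out/minikube-linux-arm64 -p functional-884273 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-884273 cache reload
	out/minikube-linux-arm64 -p functional-884273 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # should succeed again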

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 kubectl -- --context functional-884273 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-884273 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (46.48s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-884273 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-884273 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.475445954s)
functional_test.go:761: restart took 46.475563165s for "functional-884273" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (46.48s)
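
--extra-config takes component-scoped key=value pairs; here it passes an admission plugin to the apiserver during a soft restart of the existing profile (sketch of the general form, taken from the command above):

	out/minikube-linux-arm64 start -p functional-884273 \
	  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all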

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-884273 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.75s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-884273 logs: (1.752101951s)
--- PASS: TestFunctional/serial/LogsCmd (1.75s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.78s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 logs --file /tmp/TestFunctionalserialLogsFileCmd2820762145/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-884273 logs --file /tmp/TestFunctionalserialLogsFileCmd2820762145/001/logs.txt: (1.776116956s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.78s)

                                                
                                    
TestFunctional/serial/InvalidService (4.24s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-884273 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-884273
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-884273: exit status 115 (576.043749ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31383 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-884273 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.24s)
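
The SVC_UNREACHABLE exit above is the expected outcome for a Service whose selector matches no running pod; the manual cycle is the same apply/service/delete seen in this entry (sketch):

	kubectl --context functional-884273 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-arm64 service invalid-svc -p functional-884273    # expected to exit non-zero
	kubectl --context functional-884273 delete -f testdata/invalidsvc.yaml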

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-884273 config get cpus: exit status 14 (94.493497ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-884273 config get cpus: exit status 14 (50.371652ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
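
config get on an unset key exits 14 (seen twice above), so the unset/get pairs double as negative checks; the positive path is just set followed by get (sketch):

	out/minikube-linux-arm64 -p functional-884273 config set cpus 2
	out/minikube-linux-arm64 -p functional-884273 config get cpus
	out/minikube-linux-arm64 -p functional-884273 config unset cpus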

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-884273 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-884273 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 333735: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.10s)

                                                
                                    
TestFunctional/parallel/DryRun (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-884273 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-884273 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (167.51167ms)

                                                
                                                
-- stdout --
	* [functional-884273] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19643-292860/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-292860/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 17:49:43.834404  333189 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:49:43.834529  333189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:49:43.834540  333189 out.go:358] Setting ErrFile to fd 2...
	I0914 17:49:43.834546  333189 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:49:43.834797  333189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-292860/.minikube/bin
	I0914 17:49:43.835202  333189 out.go:352] Setting JSON to false
	I0914 17:49:43.836153  333189 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5536,"bootTime":1726330648,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0914 17:49:43.836229  333189 start.go:139] virtualization:  
	I0914 17:49:43.838799  333189 out.go:177] * [functional-884273] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0914 17:49:43.841137  333189 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 17:49:43.841263  333189 notify.go:220] Checking for updates...
	I0914 17:49:43.845637  333189 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 17:49:43.847488  333189 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-292860/kubeconfig
	I0914 17:49:43.849763  333189 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-292860/.minikube
	I0914 17:49:43.851657  333189 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 17:49:43.853262  333189 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 17:49:43.855383  333189 config.go:182] Loaded profile config "functional-884273": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0914 17:49:43.855922  333189 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 17:49:43.885197  333189 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 17:49:43.885331  333189 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 17:49:43.941033  333189 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-14 17:49:43.931466793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 17:49:43.941146  333189 docker.go:318] overlay module found
	I0914 17:49:43.943071  333189 out.go:177] * Using the docker driver based on existing profile
	I0914 17:49:43.944846  333189 start.go:297] selected driver: docker
	I0914 17:49:43.944866  333189 start.go:901] validating driver "docker" against &{Name:functional-884273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-884273 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:49:43.944979  333189 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 17:49:43.947211  333189 out.go:201] 
	W0914 17:49:43.948924  333189 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0914 17:49:43.950712  333189 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-884273 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.43s)
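The dry-run check above only looks at the exit status of the start command. A minimal standalone sketch of the same idea (not the actual helper in functional_test.go), assuming the minikube binary and functional-884273 profile from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation as in the log: a dry-run start whose memory request is
	// below minikube's usable minimum, so validation should fail fast.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-884273",
		"--dry-run", "--memory", "250MB", "--alsologtostderr",
		"--driver=docker", "--container-runtime=containerd")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if !errors.As(err, &exitErr) {
		fmt.Printf("expected a validation failure, got err=%v\n%s", err, out)
		return
	}
	// This report shows exit status 23 for RSRC_INSUFFICIENT_REQ_MEMORY.
	fmt.Printf("exit code %d\n%s", exitErr.ExitCode(), out)
}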

TestFunctional/parallel/InternationalLanguage (0.28s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-884273 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-884273 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (277.127365ms)

-- stdout --
	* [functional-884273] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19643-292860/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-292860/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0914 17:49:44.287265  333299 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:49:44.287479  333299 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:49:44.287503  333299 out.go:358] Setting ErrFile to fd 2...
	I0914 17:49:44.287525  333299 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:49:44.289548  333299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-292860/.minikube/bin
	I0914 17:49:44.290016  333299 out.go:352] Setting JSON to false
	I0914 17:49:44.291010  333299 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5536,"bootTime":1726330648,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0914 17:49:44.291121  333299 start.go:139] virtualization:  
	I0914 17:49:44.294566  333299 out.go:177] * [functional-884273] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0914 17:49:44.298917  333299 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 17:49:44.298860  333299 notify.go:220] Checking for updates...
	I0914 17:49:44.302341  333299 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 17:49:44.304226  333299 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-292860/kubeconfig
	I0914 17:49:44.305863  333299 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-292860/.minikube
	I0914 17:49:44.307245  333299 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 17:49:44.308832  333299 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 17:49:44.310782  333299 config.go:182] Loaded profile config "functional-884273": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0914 17:49:44.311310  333299 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 17:49:44.368296  333299 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 17:49:44.368434  333299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 17:49:44.473202  333299 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-14 17:49:44.460984163 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 17:49:44.473392  333299 docker.go:318] overlay module found
	I0914 17:49:44.475580  333299 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0914 17:49:44.477598  333299 start.go:297] selected driver: docker
	I0914 17:49:44.477619  333299 start.go:901] validating driver "docker" against &{Name:functional-884273 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726281268-19643@sha256:cce8be0c1ac4e3d852132008ef1cc1dcf5b79f708d025db83f146ae65db32e8e Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-884273 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0914 17:49:44.477721  333299 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 17:49:44.479925  333299 out.go:201] 
	W0914 17:49:44.481789  333299 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0914 17:49:44.483413  333299 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.28s)
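The localized run above is the same dry-run command. How the test itself switches the locale is not visible in this log; one plausible way to reproduce the French output by hand is to export a French locale before invoking the binary (LC_ALL/LANG here are an assumption, not something this report confirms):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-884273",
		"--dry-run", "--memory", "250MB", "--alsologtostderr",
		"--driver=docker", "--container-runtime=containerd")
	// Assumption: the display language follows the standard locale environment
	// variables; the log does not show how the test forces this.
	cmd.Env = append(os.Environ(), "LC_ALL=fr_FR.UTF-8", "LANG=fr_FR.UTF-8")
	out, _ := cmd.CombinedOutput() // a non-zero exit is expected, as in the English run
	fmt.Printf("%s", out)          // should contain the French RSRC_INSUFFICIENT_REQ_MEMORY message
}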

TestFunctional/parallel/StatusCmd (1.03s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)
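The status checks above exercise three output modes; the `-o json` variant is the one that lends itself to programmatic use. A rough sketch of consuming it, with the field names taken from the Go template in the log (the real JSON carries more fields, and a multi-node profile would emit a list rather than a single object):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Only the fields referenced by the test's format template are listed here.
type clusterStatus struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-884273",
		"status", "-o", "json").Output()
	if err != nil {
		// minikube status exits non-zero when components are not running;
		// the output may still be valid JSON, so keep going.
		fmt.Println("status returned an error:", err)
	}
	var st clusterStatus
	if jsonErr := json.Unmarshal(out, &st); jsonErr != nil {
		fmt.Println("could not parse status JSON:", jsonErr)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}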

TestFunctional/parallel/ServiceCmdConnect (7.68s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-884273 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-884273 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-x7x5c" [813102b8-1ae5-4d6f-8ca0-b1814f3496bd] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-x7x5c" [813102b8-1ae5-4d6f-8ca0-b1814f3496bd] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.00406165s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30579
functional_test.go:1675: http://192.168.49.2:30579: success! body:

Hostname: hello-node-connect-65d86f57f4-x7x5c

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30579
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.68s)
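The connectivity check boils down to resolving the NodePort URL and making a plain HTTP GET against it. A compact sketch of that last step, assuming the hello-node-connect deployment and service created above still exist:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Resolve the NodePort URL the same way the test does.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-884273",
		"service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.49.2:30579 in this run
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The echoserver response starts with "Hostname: <pod name>", as shown above.
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}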

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (23.73s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [ba43ee8d-f5cf-49ea-95e1-1aa671e526a4] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003194102s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-884273 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-884273 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-884273 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-884273 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [06a7993e-b47c-47ce-88be-ee4325193ee7] Pending
helpers_test.go:344: "sp-pod" [06a7993e-b47c-47ce-88be-ee4325193ee7] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [06a7993e-b47c-47ce-88be-ee4325193ee7] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.00356917s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-884273 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-884273 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-884273 delete -f testdata/storage-provisioner/pod.yaml: (1.654721887s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-884273 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c7f3593c-94ff-4a0b-a76f-1214cb4a3f0d] Pending
helpers_test.go:344: "sp-pod" [c7f3593c-94ff-4a0b-a76f-1214cb4a3f0d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.00496242s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-884273 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (23.73s)
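The PVC flow above is what proves persistence: write through one pod, delete it, and read the same path from a replacement pod backed by the same claim. A condensed sketch of that sequence, assuming the same kubectl context and the testdata manifests the test uses (the small run helper is hypothetical, not part of the test suite):

package main

import (
	"fmt"
	"os/exec"
)

// run shells out to kubectl against the functional-884273 context.
func run(args ...string) string {
	out, err := exec.Command("kubectl",
		append([]string{"--context", "functional-884273"}, args...)...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	return string(out)
}

func main() {
	run("apply", "-f", "testdata/storage-provisioner/pvc.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// ... wait for sp-pod to be Running, e.g. with `kubectl wait` ...
	run("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	run("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	run("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// ... wait for the replacement sp-pod ...
	fmt.Println(run("exec", "sp-pod", "--", "ls", "/tmp/mount")) // should list "foo"
}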

TestFunctional/parallel/SSHCmd (0.58s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.58s)

TestFunctional/parallel/CpCmd (2.01s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh -n functional-884273 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 cp functional-884273:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3430877458/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh -n functional-884273 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh -n functional-884273 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.01s)
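Each `cp` above is immediately verified with an `ssh sudo cat`. A minimal round-trip check along the same lines, assuming the running functional-884273 profile and the same testdata file:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		panic(err)
	}
	mk := "out/minikube-linux-arm64"
	// Copy the file into the node, then read it back over ssh.
	if out, cpErr := exec.Command(mk, "-p", "functional-884273", "cp",
		"testdata/cp-test.txt", "/home/docker/cp-test.txt").CombinedOutput(); cpErr != nil {
		panic(fmt.Sprintf("%v: %s", cpErr, out))
	}
	remote, err := exec.Command(mk, "-p", "functional-884273", "ssh", "-n", "functional-884273",
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		panic(err)
	}
	if !bytes.Equal(bytes.TrimSpace(local), bytes.TrimSpace(remote)) {
		fmt.Println("contents differ after cp")
		return
	}
	fmt.Println("cp round-trip OK")
}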

TestFunctional/parallel/FileSync (0.4s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/298255/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh "sudo cat /etc/test/nested/copy/298255/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.40s)

TestFunctional/parallel/CertSync (2.12s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/298255.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh "sudo cat /etc/ssl/certs/298255.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/298255.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh "sudo cat /usr/share/ca-certificates/298255.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/2982552.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh "sudo cat /etc/ssl/certs/2982552.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/2982552.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh "sudo cat /usr/share/ca-certificates/2982552.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.12s)

TestFunctional/parallel/NodeLabels (0.12s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-884273 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-884273 ssh "sudo systemctl is-active docker": exit status 1 (354.103266ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-884273 ssh "sudo systemctl is-active crio": exit status 1 (352.971986ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.71s)
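The non-zero exits above are the expected result: `systemctl is-active` reports `inactive` and exits non-zero for the runtimes that are not selected (containerd is the active runtime in this profile), and `minikube ssh` surfaces that as a failed command. A small sketch of the same probe:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "crio", "containerd"} {
		// A non-zero exit just means the unit is not active; the printed
		// state ("active"/"inactive") is the interesting part.
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-884273",
			"ssh", "sudo systemctl is-active "+unit).Output()
		fmt.Printf("%-10s state=%s err=%v\n", unit, strings.TrimSpace(string(out)), err)
	}
}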

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.29s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-884273 version -o=json --components: (1.294680576s)
--- PASS: TestFunctional/parallel/Version/components (1.29s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-884273 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-884273
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-884273
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-884273 image ls --format short --alsologtostderr:
I0914 17:49:46.617317  333707 out.go:345] Setting OutFile to fd 1 ...
I0914 17:49:46.617546  333707 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:49:46.617568  333707 out.go:358] Setting ErrFile to fd 2...
I0914 17:49:46.617597  333707 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:49:46.617884  333707 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-292860/.minikube/bin
I0914 17:49:46.618573  333707 config.go:182] Loaded profile config "functional-884273": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0914 17:49:46.618724  333707 config.go:182] Loaded profile config "functional-884273": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0914 17:49:46.619226  333707 cli_runner.go:164] Run: docker container inspect functional-884273 --format={{.State.Status}}
I0914 17:49:46.636948  333707 ssh_runner.go:195] Run: systemctl --version
I0914 17:49:46.637002  333707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-884273
I0914 17:49:46.657881  333707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/functional-884273/id_rsa Username:docker}
I0914 17:49:46.773671  333707 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-884273 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kicbase/echo-server               | functional-884273  | sha256:ce2d2c | 2.17MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/library/minikube-local-cache-test | functional-884273  | sha256:9bad62 | 992B   |
| docker.io/library/nginx                     | latest             | sha256:195245 | 67.7MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| docker.io/library/nginx                     | alpine             | sha256:b887ac | 19.6MB |
| localhost/my-image                          | functional-884273  | sha256:2844e6 | 831kB  |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-884273 image ls --format table --alsologtostderr:
I0914 17:49:51.077083  334220 out.go:345] Setting OutFile to fd 1 ...
I0914 17:49:51.077387  334220 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:49:51.077411  334220 out.go:358] Setting ErrFile to fd 2...
I0914 17:49:51.077429  334220 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:49:51.077806  334220 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-292860/.minikube/bin
I0914 17:49:51.078824  334220 config.go:182] Loaded profile config "functional-884273": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0914 17:49:51.079019  334220 config.go:182] Loaded profile config "functional-884273": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0914 17:49:51.079757  334220 cli_runner.go:164] Run: docker container inspect functional-884273 --format={{.State.Status}}
I0914 17:49:51.109592  334220 ssh_runner.go:195] Run: systemctl --version
I0914 17:49:51.109647  334220 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-884273
I0914 17:49:51.129715  334220 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/functional-884273/id_rsa Username:docker}
I0914 17:49:51.229334  334220 ssh_runner.go:195] Run: sudo crictl images --output json
2024/09/14 17:49:58 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-884273 image ls --format json --alsologtostderr:
[{"id":"sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3"],"repoTags":["docker.io/library/nginx:latest"],"size":"67695038"},{"id":"sha256:2844e69df40fbe22c02a032884db241191d8178179baba84e6d33b04d28c732a","repoDigests":[],"repoTags":["localhost/my-image:functional-884273"],"size":"830617"},{"id":"sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"},{"id":"sha256:9bad6257c04bb46b244ffa0117410dce208679c03bc6176ad0957197a1700e37","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-884273"],"size":"992"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-s
craper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"25687130"},{"id":"sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[
"registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c7484
19a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-884273"],"size":"2173567"},{"id":"sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"23948670"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:3d18732f868
6cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19621732"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-884273 image ls --format json --alsologtostderr:
I0914 17:49:50.813668  334188 out.go:345] Setting OutFile to fd 1 ...
I0914 17:49:50.813824  334188 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:49:50.813851  334188 out.go:358] Setting ErrFile to fd 2...
I0914 17:49:50.813858  334188 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:49:50.814117  334188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-292860/.minikube/bin
I0914 17:49:50.816059  334188 config.go:182] Loaded profile config "functional-884273": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0914 17:49:50.816251  334188 config.go:182] Loaded profile config "functional-884273": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0914 17:49:50.816793  334188 cli_runner.go:164] Run: docker container inspect functional-884273 --format={{.State.Status}}
I0914 17:49:50.837821  334188 ssh_runner.go:195] Run: systemctl --version
I0914 17:49:50.837879  334188 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-884273
I0914 17:49:50.856050  334188 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/functional-884273/id_rsa Username:docker}
I0914 17:49:50.952798  334188 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
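The JSON listing above is straightforward to post-process. A sketch that decodes it into a slice, with the field names and the string-typed size taken directly from the output shown:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Shape of each entry in `minikube image ls --format json`, as seen above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-884273",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			fmt.Printf("%s\t%s\t%s bytes\n", tag, img.ID, img.Size)
		}
	}
}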

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-884273 image ls --format yaml --alsologtostderr:
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-884273
size: "2173567"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "19621732"
- id: sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
repoTags:
- docker.io/library/nginx:latest
size: "67695038"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:9bad6257c04bb46b244ffa0117410dce208679c03bc6176ad0957197a1700e37
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-884273
size: "992"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-884273 image ls --format yaml --alsologtostderr:
I0914 17:49:46.933918  333747 out.go:345] Setting OutFile to fd 1 ...
I0914 17:49:46.934106  333747 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:49:46.934134  333747 out.go:358] Setting ErrFile to fd 2...
I0914 17:49:46.934155  333747 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:49:46.934410  333747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-292860/.minikube/bin
I0914 17:49:46.935160  333747 config.go:182] Loaded profile config "functional-884273": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0914 17:49:46.935338  333747 config.go:182] Loaded profile config "functional-884273": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0914 17:49:46.935965  333747 cli_runner.go:164] Run: docker container inspect functional-884273 --format={{.State.Status}}
I0914 17:49:46.955332  333747 ssh_runner.go:195] Run: systemctl --version
I0914 17:49:46.955385  333747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-884273
I0914 17:49:46.975117  333747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/functional-884273/id_rsa Username:docker}
I0914 17:49:47.072885  333747 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-884273 ssh pgrep buildkitd: exit status 1 (269.60802ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 image build -t localhost/my-image:functional-884273 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-884273 image build -t localhost/my-image:functional-884273 testdata/build --alsologtostderr: (3.102378744s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-884273 image build -t localhost/my-image:functional-884273 testdata/build --alsologtostderr:
I0914 17:49:47.439111  333836 out.go:345] Setting OutFile to fd 1 ...
I0914 17:49:47.439834  333836 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:49:47.439849  333836 out.go:358] Setting ErrFile to fd 2...
I0914 17:49:47.439855  333836 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0914 17:49:47.440112  333836 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-292860/.minikube/bin
I0914 17:49:47.440778  333836 config.go:182] Loaded profile config "functional-884273": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0914 17:49:47.441876  333836 config.go:182] Loaded profile config "functional-884273": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0914 17:49:47.442374  333836 cli_runner.go:164] Run: docker container inspect functional-884273 --format={{.State.Status}}
I0914 17:49:47.458993  333836 ssh_runner.go:195] Run: systemctl --version
I0914 17:49:47.459058  333836 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-884273
I0914 17:49:47.487726  333836 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/functional-884273/id_rsa Username:docker}
I0914 17:49:47.587976  333836 build_images.go:161] Building image from path: /tmp/build.369289831.tar
I0914 17:49:47.588048  333836 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0914 17:49:47.596993  333836 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.369289831.tar
I0914 17:49:47.600361  333836 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.369289831.tar: stat -c "%s %y" /var/lib/minikube/build/build.369289831.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.369289831.tar': No such file or directory
I0914 17:49:47.600398  333836 ssh_runner.go:362] scp /tmp/build.369289831.tar --> /var/lib/minikube/build/build.369289831.tar (3072 bytes)
I0914 17:49:47.625535  333836 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.369289831
I0914 17:49:47.639853  333836 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.369289831 -xf /var/lib/minikube/build/build.369289831.tar
I0914 17:49:47.650263  333836 containerd.go:394] Building image: /var/lib/minikube/build/build.369289831
I0914 17:49:47.650345  333836 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.369289831 --local dockerfile=/var/lib/minikube/build/build.369289831 --output type=image,name=localhost/my-image:functional-884273
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.5s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.3s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:d7f6692c6737a1225cc82e14ccaa390d3a44b9e5e616bbedff0da7792456a465
#8 exporting manifest sha256:d7f6692c6737a1225cc82e14ccaa390d3a44b9e5e616bbedff0da7792456a465 0.0s done
#8 exporting config sha256:2844e69df40fbe22c02a032884db241191d8178179baba84e6d33b04d28c732a 0.0s done
#8 naming to localhost/my-image:functional-884273 done
#8 DONE 0.2s
I0914 17:49:50.460943  333836 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.369289831 --local dockerfile=/var/lib/minikube/build/build.369289831 --output type=image,name=localhost/my-image:functional-884273: (2.810570566s)
I0914 17:49:50.461016  333836 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.369289831
I0914 17:49:50.471430  333836 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.369289831.tar
I0914 17:49:50.480840  333836 build_images.go:217] Built localhost/my-image:functional-884273 from /tmp/build.369289831.tar
I0914 17:49:50.480923  333836 build_images.go:133] succeeded building to: functional-884273
I0914 17:49:50.480931  333836 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.63s)
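
For readers reproducing the ImageBuild flow above outside the test harness, here is a minimal Go sketch (not part of functional_test.go) that runs the same two commands shown in this log: `minikube image build` against testdata/build, then `minikube image ls` to confirm the tag. The binary path, profile name and tag are copied from this run; everything else is illustrative.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Names below are assumptions copied from this report.
	minikube := "out/minikube-linux-arm64"
	profile := "functional-884273"
	tag := "localhost/my-image:" + profile

	// Equivalent of: minikube -p functional-884273 image build -t <tag> testdata/build
	build := exec.Command(minikube, "-p", profile, "image", "build", "-t", tag, "testdata/build")
	if out, err := build.CombinedOutput(); err != nil {
		log.Fatalf("image build failed: %v\n%s", err, out)
	}

	// Equivalent of: minikube -p functional-884273 image ls
	out, err := exec.Command(minikube, "-p", profile, "image", "ls").Output()
	if err != nil {
		log.Fatalf("image ls failed: %v", err)
	}
	if strings.Contains(string(out), tag) {
		fmt.Println("image present after build:", tag)
	} else {
		fmt.Println("image not listed after build")
	}
}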

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-884273
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.74s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 image load --daemon kicbase/echo-server:functional-884273 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-884273 image load --daemon kicbase/echo-server:functional-884273 --alsologtostderr: (1.232913157s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 image load --daemon kicbase/echo-server:functional-884273 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-884273 image load --daemon kicbase/echo-server:functional-884273 --alsologtostderr: (1.105148763s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (11.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-884273 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-884273 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-7b9vv" [c5bea21e-f770-405b-8fdd-a7d8e3f3c0a8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-7b9vv" [c5bea21e-f770-405b-8fdd-a7d8e3f3c0a8] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004585009s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.31s)
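
The DeployApp steps above reduce to two kubectl calls plus a wait for the pod to become ready. Below is a minimal Go sketch of that sequence; the context name is taken from this run, and the rollout-status wait is a simplification of the label-based pod polling the test performs.

package main

import (
	"log"
	"os/exec"
)

// run executes one command and fails loudly, so the sequence below reads
// like the kubectl calls in the log.
func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
}

func main() {
	ctx := "functional-884273" // assumed kube context, as in this report

	// Same two kubectl calls as the test above.
	run("kubectl", "--context", ctx, "create", "deployment", "hello-node",
		"--image=registry.k8s.io/echoserver-arm:1.8")
	run("kubectl", "--context", ctx, "expose", "deployment", "hello-node",
		"--type=NodePort", "--port=8080")

	// The test polls pods labelled app=hello-node; waiting on the rollout is a
	// simpler stand-in used here for illustration only.
	run("kubectl", "--context", ctx, "rollout", "status", "deployment/hello-node", "--timeout=10m")
}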

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-884273
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 image load --daemon kicbase/echo-server:functional-884273 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-884273 image load --daemon kicbase/echo-server:functional-884273 --alsologtostderr: (1.126840663s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.73s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 image save kicbase/echo-server:functional-884273 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.41s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 image rm kicbase/echo-server:functional-884273 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-884273
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 image save --daemon kicbase/echo-server:functional-884273 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-884273
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)
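
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon exercise a save/remove/load round trip. A compact Go sketch of that cycle, assuming the binary path and profile from this report and a hypothetical /tmp tarball path, could look like:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// mk runs one minikube subcommand against the profile used in this report.
func mk(args ...string) {
	cmd := exec.Command("out/minikube-linux-arm64", append([]string{"-p", "functional-884273"}, args...)...)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("minikube %v failed: %v\n%s", args, err, out)
	}
}

func main() {
	img := "kicbase/echo-server:functional-884273"
	tar := "/tmp/echo-server-save.tar" // hypothetical path; the test writes into its Jenkins workspace

	mk("image", "save", img, tar) // export the image from the cluster runtime to a tarball
	mk("image", "rm", img)        // remove it from the cluster
	mk("image", "load", tar)      // load it back from the tarball

	// List images again; the tag should reappear after the round trip.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-884273", "image", "ls").Output()
	if err != nil {
		log.Fatalf("image ls failed: %v", err)
	}
	fmt.Printf("%s", out)
}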

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-884273 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-884273 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-884273 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 330131: os: process already finished
helpers_test.go:508: unable to kill pid 330013: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-884273 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-884273 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-884273 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [1e337ee4-11ea-4daf-8996-49d32c6f72b6] Pending
helpers_test.go:344: "nginx-svc" [1e337ee4-11ea-4daf-8996-49d32c6f72b6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [1e337ee4-11ea-4daf-8996-49d32c6f72b6] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003665729s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 service list -o json
functional_test.go:1494: Took "344.683082ms" to run "out/minikube-linux-arm64 -p functional-884273 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30861
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30861
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)
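
The ServiceCmd checks above resolve the NodePort URL with `minikube service hello-node --url`. A hedged Go sketch that fetches the URL and then issues a plain HTTP GET (the GET is an extra verification step, not something the test above does):

package main

import (
	"fmt"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Assumed binary path and profile, as used throughout this report.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-884273",
		"service", "hello-node", "--url").Output()
	if err != nil {
		log.Fatalf("service --url failed: %v", err)
	}
	url := strings.TrimSpace(string(out)) // e.g. http://192.168.49.2:30861 in this run

	resp, err := http.Get(url)
	if err != nil {
		log.Fatalf("GET %s failed: %v", url, err)
	}
	defer resp.Body.Close()
	fmt.Println("service reachable, status:", resp.Status)
}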

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-884273 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.255.248 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
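
AccessDirect confirms the LoadBalancer address handed out while `minikube tunnel` is running. A sketch of the same check, assuming the tunnel daemon is already running in another shell and reusing the jsonpath query shown at functional_test_tunnel_test.go:234:

package main

import (
	"fmt"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Assumes `out/minikube-linux-arm64 -p functional-884273 tunnel` is already
	// running, as the test keeps it running as a daemon.
	ipBytes, err := exec.Command("kubectl", "--context", "functional-884273",
		"get", "svc", "nginx-svc", "-o",
		"jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		log.Fatalf("reading LoadBalancer ingress IP failed: %v", err)
	}
	ip := strings.TrimSpace(string(ipBytes)) // 10.99.255.248 in this run

	resp, err := http.Get("http://" + ip)
	if err != nil {
		log.Fatalf("GET http://%s failed: %v", ip, err)
	}
	defer resp.Body.Close()
	fmt.Println("tunnel route answered with status:", resp.Status)
}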

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-884273 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "372.238509ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "62.966702ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "318.586011ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "59.9886ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)
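
For the JSON output variants, the report does not include the payload itself, so the sketch below only runs `profile list -o json` and decodes it generically rather than assuming field names:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Assumed binary path, as used throughout this report.
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-o", "json").Output()
	if err != nil {
		log.Fatalf("profile list -o json failed: %v", err)
	}

	// The exact schema is not shown in this log, so decode into a generic map
	// instead of a typed struct.
	var doc map[string]json.RawMessage
	if err := json.Unmarshal(out, &doc); err != nil {
		log.Fatalf("output is not a JSON object: %v", err)
	}
	for key := range doc {
		fmt.Println("top-level key:", key)
	}
}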

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-884273 /tmp/TestFunctionalparallelMountCmdany-port3173405418/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726336172576907739" to /tmp/TestFunctionalparallelMountCmdany-port3173405418/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726336172576907739" to /tmp/TestFunctionalparallelMountCmdany-port3173405418/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726336172576907739" to /tmp/TestFunctionalparallelMountCmdany-port3173405418/001/test-1726336172576907739
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-884273 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (338.248369ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 14 17:49 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 14 17:49 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 14 17:49 test-1726336172576907739
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh cat /mount-9p/test-1726336172576907739
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-884273 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [da8fdce1-9934-4696-86ac-3230adbbcd46] Pending
helpers_test.go:344: "busybox-mount" [da8fdce1-9934-4696-86ac-3230adbbcd46] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [da8fdce1-9934-4696-86ac-3230adbbcd46] Running
helpers_test.go:344: "busybox-mount" [da8fdce1-9934-4696-86ac-3230adbbcd46] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [da8fdce1-9934-4696-86ac-3230adbbcd46] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003473954s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-884273 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-884273 /tmp/TestFunctionalparallelMountCmdany-port3173405418/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.01s)
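
The any-port mount test boils down to: start `minikube mount` as a background process, poll `findmnt -T /mount-9p` inside the guest until the 9p mount appears, then use the files. A minimal Go sketch of that loop, with the binary path and profile assumed from this run:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	minikube := "out/minikube-linux-arm64" // assumed path, as in this report
	profile := "functional-884273"         // assumed profile

	dir, err := os.MkdirTemp("", "mount-demo")
	if err != nil {
		log.Fatal(err)
	}

	// Keep the 9p server running in the background, as the test's daemon does.
	mount := exec.Command(minikube, "mount", "-p", profile, dir+":/mount-9p")
	if err := mount.Start(); err != nil {
		log.Fatalf("starting mount failed: %v", err)
	}
	defer mount.Process.Kill()

	// Poll until the guest sees the 9p mount, mirroring the findmnt retry above.
	for i := 0; i < 10; i++ {
		check := exec.Command(minikube, "-p", profile, "ssh", "findmnt -T /mount-9p | grep 9p")
		if out, err := check.CombinedOutput(); err == nil {
			fmt.Printf("mounted:\n%s", out)
			return
		}
		time.Sleep(time.Second)
	}
	log.Fatal("mount never became visible in the guest")
}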

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-884273 /tmp/TestFunctionalparallelMountCmdspecific-port1057816582/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-884273 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (339.406175ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-884273 /tmp/TestFunctionalparallelMountCmdspecific-port1057816582/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-884273 ssh "sudo umount -f /mount-9p": exit status 1 (348.321266ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-884273 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-884273 /tmp/TestFunctionalparallelMountCmdspecific-port1057816582/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.96s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-884273 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3789432636/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-884273 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3789432636/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-884273 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3789432636/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-884273 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-884273 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-884273 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3789432636/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-884273 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3789432636/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-884273 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3789432636/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.25s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-884273
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-884273
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-884273
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (123.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-865961 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0914 17:50:17.798430  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:50:17.805225  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:50:17.816634  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:50:17.838776  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:50:17.880149  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:50:17.961708  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:50:18.123183  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:50:18.444476  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:50:19.085832  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:50:20.367953  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:50:22.930279  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:50:28.051802  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:50:38.293461  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:50:58.774858  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:51:39.736202  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-865961 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m2.460211728s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (123.31s)
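
StartCluster above launches a cluster with three control-plane nodes via the --ha flag. A Go sketch that replays the same start and status invocations verbatim (binary path assumed from this report):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	minikube := "out/minikube-linux-arm64" // assumed path, as in this report

	// Same flags as the ha_test.go invocation above: --ha for multiple control
	// planes, docker driver, containerd runtime.
	start := exec.Command(minikube, "start", "-p", "ha-865961", "--wait=true",
		"--memory=2200", "--ha", "-v=7", "--alsologtostderr",
		"--driver=docker", "--container-runtime=containerd")
	if out, err := start.CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}

	status, err := exec.Command(minikube, "-p", "ha-865961", "status",
		"-v=7", "--alsologtostderr").CombinedOutput()
	if err != nil {
		log.Printf("status returned %v (non-zero is expected once any node is stopped)", err)
	}
	fmt.Printf("%s", status)
}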

                                                
                                    
TestMultiControlPlane/serial/DeployApp (29.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-865961 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-865961 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-865961 -- rollout status deployment/busybox: (26.02837762s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-865961 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-865961 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-865961 -- exec busybox-7dff88458-ffbx9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-865961 -- exec busybox-7dff88458-lbjc4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-865961 -- exec busybox-7dff88458-tsrvs -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-865961 -- exec busybox-7dff88458-ffbx9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-865961 -- exec busybox-7dff88458-lbjc4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-865961 -- exec busybox-7dff88458-tsrvs -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-865961 -- exec busybox-7dff88458-ffbx9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-865961 -- exec busybox-7dff88458-lbjc4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-865961 -- exec busybox-7dff88458-tsrvs -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (29.11s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.62s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-865961 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-865961 -- exec busybox-7dff88458-ffbx9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-865961 -- exec busybox-7dff88458-ffbx9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-865961 -- exec busybox-7dff88458-lbjc4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-865961 -- exec busybox-7dff88458-lbjc4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-865961 -- exec busybox-7dff88458-tsrvs -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-865961 -- exec busybox-7dff88458-tsrvs -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.62s)
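
PingHostFromPods resolves host.minikube.internal inside each busybox pod and pings the returned gateway address. The sketch below mirrors that flow, using plain kubectl with an assumed --context instead of the `minikube kubectl -p` wrapper the test uses, and assumes the busybox pods are the only pods in the default namespace, as in this run:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// kubectl runs one kubectl command against the assumed ha-865961 context.
func kubectl(args ...string) string {
	out, err := exec.Command("kubectl", append([]string{"--context", "ha-865961"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v failed: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	// Pick the first busybox pod, as the test does via jsonpath.
	pods := strings.Fields(kubectl("get", "pods", "-o", "jsonpath={.items[*].metadata.name}"))
	if len(pods) == 0 {
		log.Fatal("no pods found")
	}
	pod := pods[0]

	// Resolve the host gateway name inside the pod, then ping the answer.
	// The awk/cut pipeline is the same one the test uses to pull the address
	// out of busybox's nslookup output.
	ip := strings.TrimSpace(kubectl("exec", pod, "--", "sh", "-c",
		"nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"))
	fmt.Println("host.minikube.internal resolves to", ip)
	fmt.Print(kubectl("exec", pod, "--", "sh", "-c", "ping -c 1 "+ip))
}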

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (21.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-865961 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-865961 -v=7 --alsologtostderr: (20.441604008s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-865961 status -v=7 --alsologtostderr: (1.018320248s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (21.46s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-865961 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.13s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.79s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.79s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.44s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-865961 status --output json -v=7 --alsologtostderr: (1.0319912s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 cp testdata/cp-test.txt ha-865961:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 cp ha-865961:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3666597246/001/cp-test_ha-865961.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 cp ha-865961:/home/docker/cp-test.txt ha-865961-m02:/home/docker/cp-test_ha-865961_ha-865961-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961-m02 "sudo cat /home/docker/cp-test_ha-865961_ha-865961-m02.txt"
E0914 17:53:01.658913  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 cp ha-865961:/home/docker/cp-test.txt ha-865961-m03:/home/docker/cp-test_ha-865961_ha-865961-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961-m03 "sudo cat /home/docker/cp-test_ha-865961_ha-865961-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 cp ha-865961:/home/docker/cp-test.txt ha-865961-m04:/home/docker/cp-test_ha-865961_ha-865961-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961-m04 "sudo cat /home/docker/cp-test_ha-865961_ha-865961-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 cp testdata/cp-test.txt ha-865961-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 cp ha-865961-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3666597246/001/cp-test_ha-865961-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 cp ha-865961-m02:/home/docker/cp-test.txt ha-865961:/home/docker/cp-test_ha-865961-m02_ha-865961.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961 "sudo cat /home/docker/cp-test_ha-865961-m02_ha-865961.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 cp ha-865961-m02:/home/docker/cp-test.txt ha-865961-m03:/home/docker/cp-test_ha-865961-m02_ha-865961-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961-m03 "sudo cat /home/docker/cp-test_ha-865961-m02_ha-865961-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 cp ha-865961-m02:/home/docker/cp-test.txt ha-865961-m04:/home/docker/cp-test_ha-865961-m02_ha-865961-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961-m04 "sudo cat /home/docker/cp-test_ha-865961-m02_ha-865961-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 cp testdata/cp-test.txt ha-865961-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 cp ha-865961-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3666597246/001/cp-test_ha-865961-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 cp ha-865961-m03:/home/docker/cp-test.txt ha-865961:/home/docker/cp-test_ha-865961-m03_ha-865961.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961 "sudo cat /home/docker/cp-test_ha-865961-m03_ha-865961.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 cp ha-865961-m03:/home/docker/cp-test.txt ha-865961-m02:/home/docker/cp-test_ha-865961-m03_ha-865961-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961-m02 "sudo cat /home/docker/cp-test_ha-865961-m03_ha-865961-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 cp ha-865961-m03:/home/docker/cp-test.txt ha-865961-m04:/home/docker/cp-test_ha-865961-m03_ha-865961-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961-m04 "sudo cat /home/docker/cp-test_ha-865961-m03_ha-865961-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 cp testdata/cp-test.txt ha-865961-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 cp ha-865961-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3666597246/001/cp-test_ha-865961-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 cp ha-865961-m04:/home/docker/cp-test.txt ha-865961:/home/docker/cp-test_ha-865961-m04_ha-865961.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961 "sudo cat /home/docker/cp-test_ha-865961-m04_ha-865961.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 cp ha-865961-m04:/home/docker/cp-test.txt ha-865961-m02:/home/docker/cp-test_ha-865961-m04_ha-865961-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961-m02 "sudo cat /home/docker/cp-test_ha-865961-m04_ha-865961-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 cp ha-865961-m04:/home/docker/cp-test.txt ha-865961-m03:/home/docker/cp-test_ha-865961-m04_ha-865961-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 ssh -n ha-865961-m03 "sudo cat /home/docker/cp-test_ha-865961-m04_ha-865961-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.44s)
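
CopyFile repeats one pattern for every node pair: `minikube cp` a file onto a node, then `minikube ssh -n <node>` to cat it back. A minimal Go sketch of a single round trip (binary path, profile and node name taken from this log):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	minikube := "out/minikube-linux-arm64" // assumed path, as in this report
	profile := "ha-865961"
	node := profile + "-m02"

	// Copy a local file onto a specific node, then read it back over ssh,
	// exactly the pattern the CopyFile helpers run above.
	cp := exec.Command(minikube, "-p", profile, "cp", "testdata/cp-test.txt",
		node+":/home/docker/cp-test.txt")
	if out, err := cp.CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}

	cat, err := exec.Command(minikube, "-p", profile, "ssh", "-n", node,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		log.Fatalf("ssh cat failed: %v", err)
	}
	fmt.Printf("%s", cat)
}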

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-865961 node stop m02 -v=7 --alsologtostderr: (12.122669731s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-865961 status -v=7 --alsologtostderr: exit status 7 (861.329429ms)

                                                
                                                
-- stdout --
	ha-865961
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-865961-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-865961-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-865961-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 17:53:29.712597  350165 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:53:29.712824  350165 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:53:29.712853  350165 out.go:358] Setting ErrFile to fd 2...
	I0914 17:53:29.712875  350165 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:53:29.713178  350165 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-292860/.minikube/bin
	I0914 17:53:29.713423  350165 out.go:352] Setting JSON to false
	I0914 17:53:29.713502  350165 mustload.go:65] Loading cluster: ha-865961
	I0914 17:53:29.713602  350165 notify.go:220] Checking for updates...
	I0914 17:53:29.714134  350165 config.go:182] Loaded profile config "ha-865961": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0914 17:53:29.714463  350165 status.go:255] checking status of ha-865961 ...
	I0914 17:53:29.715175  350165 cli_runner.go:164] Run: docker container inspect ha-865961 --format={{.State.Status}}
	I0914 17:53:29.749360  350165 status.go:330] ha-865961 host status = "Running" (err=<nil>)
	I0914 17:53:29.749387  350165 host.go:66] Checking if "ha-865961" exists ...
	I0914 17:53:29.749713  350165 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-865961
	I0914 17:53:29.772983  350165 host.go:66] Checking if "ha-865961" exists ...
	I0914 17:53:29.773288  350165 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:53:29.773345  350165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-865961
	I0914 17:53:29.791712  350165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/ha-865961/id_rsa Username:docker}
	I0914 17:53:29.890618  350165 ssh_runner.go:195] Run: systemctl --version
	I0914 17:53:29.895359  350165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:53:29.908617  350165 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 17:53:29.986509  350165 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-14 17:53:29.97501166 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 17:53:29.987363  350165 kubeconfig.go:125] found "ha-865961" server: "https://192.168.49.254:8443"
	I0914 17:53:29.987399  350165 api_server.go:166] Checking apiserver status ...
	I0914 17:53:29.987452  350165 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:53:30.001002  350165 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1515/cgroup
	I0914 17:53:30.034179  350165 api_server.go:182] apiserver freezer: "10:freezer:/docker/8076255754cce2ad4175a9fc82b6f0043a3e0ac71ea5b069d5e1c9e29a39800f/kubepods/burstable/pod05c59dd9c0335842a22d87fc9613380e/3517405a87cb59625c0ce00f5a3cd939e4d394a1553892ed5a15b921a0e05474"
	I0914 17:53:30.034268  350165 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/8076255754cce2ad4175a9fc82b6f0043a3e0ac71ea5b069d5e1c9e29a39800f/kubepods/burstable/pod05c59dd9c0335842a22d87fc9613380e/3517405a87cb59625c0ce00f5a3cd939e4d394a1553892ed5a15b921a0e05474/freezer.state
	I0914 17:53:30.074228  350165 api_server.go:204] freezer state: "THAWED"
	I0914 17:53:30.074260  350165 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0914 17:53:30.085818  350165 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0914 17:53:30.085853  350165 status.go:422] ha-865961 apiserver status = Running (err=<nil>)
	I0914 17:53:30.085866  350165 status.go:257] ha-865961 status: &{Name:ha-865961 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:53:30.085922  350165 status.go:255] checking status of ha-865961-m02 ...
	I0914 17:53:30.086288  350165 cli_runner.go:164] Run: docker container inspect ha-865961-m02 --format={{.State.Status}}
	I0914 17:53:30.105629  350165 status.go:330] ha-865961-m02 host status = "Stopped" (err=<nil>)
	I0914 17:53:30.105657  350165 status.go:343] host is not running, skipping remaining checks
	I0914 17:53:30.105667  350165 status.go:257] ha-865961-m02 status: &{Name:ha-865961-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:53:30.105689  350165 status.go:255] checking status of ha-865961-m03 ...
	I0914 17:53:30.106070  350165 cli_runner.go:164] Run: docker container inspect ha-865961-m03 --format={{.State.Status}}
	I0914 17:53:30.131086  350165 status.go:330] ha-865961-m03 host status = "Running" (err=<nil>)
	I0914 17:53:30.131117  350165 host.go:66] Checking if "ha-865961-m03" exists ...
	I0914 17:53:30.131488  350165 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-865961-m03
	I0914 17:53:30.152293  350165 host.go:66] Checking if "ha-865961-m03" exists ...
	I0914 17:53:30.152662  350165 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:53:30.152716  350165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-865961-m03
	I0914 17:53:30.174640  350165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/ha-865961-m03/id_rsa Username:docker}
	I0914 17:53:30.277699  350165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:53:30.292012  350165 kubeconfig.go:125] found "ha-865961" server: "https://192.168.49.254:8443"
	I0914 17:53:30.292054  350165 api_server.go:166] Checking apiserver status ...
	I0914 17:53:30.292106  350165 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 17:53:30.312357  350165 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1334/cgroup
	I0914 17:53:30.324873  350165 api_server.go:182] apiserver freezer: "10:freezer:/docker/31feffbd1b4a689baaf3829b05db99c529d94cd7e42d21ed01f366b890aee451/kubepods/burstable/pod55c0719e686686861d7d966e18e07791/cd5326b31fc427e2b04c5d4fd4b34a4ee2e5c23140e15cd52a147882fd9734f2"
	I0914 17:53:30.324949  350165 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/31feffbd1b4a689baaf3829b05db99c529d94cd7e42d21ed01f366b890aee451/kubepods/burstable/pod55c0719e686686861d7d966e18e07791/cd5326b31fc427e2b04c5d4fd4b34a4ee2e5c23140e15cd52a147882fd9734f2/freezer.state
	I0914 17:53:30.335728  350165 api_server.go:204] freezer state: "THAWED"
	I0914 17:53:30.335762  350165 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0914 17:53:30.343994  350165 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0914 17:53:30.344024  350165 status.go:422] ha-865961-m03 apiserver status = Running (err=<nil>)
	I0914 17:53:30.344042  350165 status.go:257] ha-865961-m03 status: &{Name:ha-865961-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:53:30.344060  350165 status.go:255] checking status of ha-865961-m04 ...
	I0914 17:53:30.344401  350165 cli_runner.go:164] Run: docker container inspect ha-865961-m04 --format={{.State.Status}}
	I0914 17:53:30.362644  350165 status.go:330] ha-865961-m04 host status = "Running" (err=<nil>)
	I0914 17:53:30.362670  350165 host.go:66] Checking if "ha-865961-m04" exists ...
	I0914 17:53:30.362974  350165 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-865961-m04
	I0914 17:53:30.381244  350165 host.go:66] Checking if "ha-865961-m04" exists ...
	I0914 17:53:30.381559  350165 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 17:53:30.381611  350165 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-865961-m04
	I0914 17:53:30.400319  350165 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/ha-865961-m04/id_rsa Username:docker}
	I0914 17:53:30.500711  350165 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 17:53:30.512556  350165 status.go:257] ha-865961-m04 status: &{Name:ha-865961-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.99s)
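Note: the stderr trace above shows how the status check decides an apiserver is Running: it finds the newest kube-apiserver process, confirms its freezer cgroup is THAWED (not frozen), and then hits /healthz on the server address taken from the kubeconfig (https://192.168.49.254:8443 here). A minimal sketch of the same probe, run inside a control-plane node; the variable names are illustrative and the cgroup v1 freezer layout is assumed, as in the log:

    PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')              # newest matching apiserver process
    CG=$(sudo grep -E '^[0-9]+:freezer:' /proc/$PID/cgroup | cut -d: -f3)
    sudo cat /sys/fs/cgroup/freezer${CG}/freezer.state               # "THAWED" when the node is not frozen
    curl -sk https://192.168.49.254:8443/healthz                     # "ok" with HTTP 200 when healthy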

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.56s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (18.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-865961 node start m02 -v=7 --alsologtostderr: (17.79681669s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-865961 status -v=7 --alsologtostderr: (1.020804149s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.95s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (154.23s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-865961 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-865961 -v=7 --alsologtostderr
E0914 17:54:07.884820  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:54:07.891313  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:54:07.902761  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:54:07.924258  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:54:07.965709  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:54:08.047156  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:54:08.208812  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:54:08.530604  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:54:09.172640  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:54:10.453992  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:54:13.016050  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:54:18.138323  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-865961 -v=7 --alsologtostderr: (37.271030881s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-865961 --wait=true -v=7 --alsologtostderr
E0914 17:54:28.380729  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:54:48.863399  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:55:17.796851  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:55:29.824714  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
E0914 17:55:45.500756  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-865961 --wait=true -v=7 --alsologtostderr: (1m56.809879166s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-865961
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (154.23s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (10.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-865961 node delete m03 -v=7 --alsologtostderr: (9.544332926s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.50s)
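Note: the go-template in the final step flattens each node's Ready condition to a single line, so a healthy cluster prints nothing but "True" entries. The same check by hand (output shown as a hedged expectation, one line per remaining node):

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    #  True
    #  True
    # a False or Unknown entry here is what the check guards against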

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.57s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (36.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 stop -v=7 --alsologtostderr
E0914 17:56:51.747892  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-865961 stop -v=7 --alsologtostderr: (36.068288518s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-865961 status -v=7 --alsologtostderr: exit status 7 (110.363484ms)

                                                
                                                
-- stdout --
	ha-865961
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-865961-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-865961-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 17:57:12.305497  364468 out.go:345] Setting OutFile to fd 1 ...
	I0914 17:57:12.305662  364468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:57:12.305674  364468 out.go:358] Setting ErrFile to fd 2...
	I0914 17:57:12.305680  364468 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 17:57:12.305923  364468 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-292860/.minikube/bin
	I0914 17:57:12.306105  364468 out.go:352] Setting JSON to false
	I0914 17:57:12.306133  364468 mustload.go:65] Loading cluster: ha-865961
	I0914 17:57:12.306260  364468 notify.go:220] Checking for updates...
	I0914 17:57:12.306561  364468 config.go:182] Loaded profile config "ha-865961": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0914 17:57:12.306572  364468 status.go:255] checking status of ha-865961 ...
	I0914 17:57:12.307496  364468 cli_runner.go:164] Run: docker container inspect ha-865961 --format={{.State.Status}}
	I0914 17:57:12.324905  364468 status.go:330] ha-865961 host status = "Stopped" (err=<nil>)
	I0914 17:57:12.324929  364468 status.go:343] host is not running, skipping remaining checks
	I0914 17:57:12.324936  364468 status.go:257] ha-865961 status: &{Name:ha-865961 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:57:12.324973  364468 status.go:255] checking status of ha-865961-m02 ...
	I0914 17:57:12.325373  364468 cli_runner.go:164] Run: docker container inspect ha-865961-m02 --format={{.State.Status}}
	I0914 17:57:12.342763  364468 status.go:330] ha-865961-m02 host status = "Stopped" (err=<nil>)
	I0914 17:57:12.342788  364468 status.go:343] host is not running, skipping remaining checks
	I0914 17:57:12.342795  364468 status.go:257] ha-865961-m02 status: &{Name:ha-865961-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 17:57:12.342822  364468 status.go:255] checking status of ha-865961-m04 ...
	I0914 17:57:12.343126  364468 cli_runner.go:164] Run: docker container inspect ha-865961-m04 --format={{.State.Status}}
	I0914 17:57:12.365442  364468 status.go:330] ha-865961-m04 host status = "Stopped" (err=<nil>)
	I0914 17:57:12.365466  364468 status.go:343] host is not running, skipping remaining checks
	I0914 17:57:12.365474  364468 status.go:257] ha-865961-m04 status: &{Name:ha-865961-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.18s)
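Note: after a full stop, the status command still prints the per-node table but exits non-zero, so the exit code alone is usable in scripts. This is grounded in the run above (exit status 7 with every node stopped); codes for other mixed states are not shown here:

    out/minikube-linux-arm64 -p ha-865961 status -v=7 --alsologtostderr; echo "exit=$?"
    # exit=7 while all nodes are stopped; 0 once the cluster is back up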

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (78.9s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-865961 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-865961 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m17.96552619s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (78.90s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (44.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-865961 --control-plane -v=7 --alsologtostderr
E0914 17:59:07.883839  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-865961 --control-plane -v=7 --alsologtostderr: (43.6441156s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-865961 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-865961 status -v=7 --alsologtostderr: (1.030324885s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.67s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.75s)

                                                
                                    
x
+
TestJSONOutput/start/Command (51.97s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-698917 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0914 17:59:35.589468  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-698917 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (51.961961415s)
--- PASS: TestJSONOutput/start/Command (51.97s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.75s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-698917 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-698917 --output=json --user=testUser
E0914 18:00:17.796356  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.79s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-698917 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-698917 --output=json --user=testUser: (5.789287032s)
--- PASS: TestJSONOutput/stop/Command (5.79s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-420524 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-420524 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (78.270683ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"de21c082-6501-4779-b98b-f430c8bae7df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-420524] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7edff936-c7e5-40cc-9286-5c4037978c06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19643"}}
	{"specversion":"1.0","id":"bcd55b48-68b8-41cb-878b-df8db4e4be3e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"06548ef8-5263-4e95-90d7-a25d80af0d49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19643-292860/kubeconfig"}}
	{"specversion":"1.0","id":"8e826891-b495-4ab6-a796-db715c3085f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-292860/.minikube"}}
	{"specversion":"1.0","id":"f65d4563-4c6d-4465-9dde-813f782c9159","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"c6c96552-c48e-4bd0-a300-188e5e5de209","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"06fb311b-72ee-432e-baa7-82610a0300d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-420524" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-420524
--- PASS: TestErrorJSONOutput (0.23s)
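Note: with --output=json, every line minikube emits is a CloudEvents-style JSON object (see the stdout block above), so the failing event can be extracted mechanically. A small sketch assuming jq is installed; the profile name is illustrative:

    out/minikube-linux-arm64 start -p demo --output=json --driver=fail 2>/dev/null \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
    # DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64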

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (36.8s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-611870 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-611870 --network=: (34.653517877s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-611870" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-611870
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-611870: (2.120031751s)
--- PASS: TestKicCustomNetwork/create_custom_network (36.80s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (32.13s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-609830 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-609830 --network=bridge: (30.511088013s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-609830" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-609830
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-609830: (1.592199044s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.13s)

                                                
                                    
x
+
TestKicExistingNetwork (35.79s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-101260 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-101260 --network=existing-network: (33.646177989s)
helpers_test.go:175: Cleaning up "existing-network-101260" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-101260
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-101260: (1.994412954s)
--- PASS: TestKicExistingNetwork (35.79s)

                                                
                                    
x
+
TestKicCustomSubnet (32.49s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-770059 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-770059 --subnet=192.168.60.0/24: (30.433549692s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-770059 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-770059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-770059
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-770059: (2.034996564s)
--- PASS: TestKicCustomSubnet (32.49s)
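Note: the inspect step reads the subnet straight back out of Docker's IPAM config, which is how the requested --subnet is verified; the expected value follows from the flag used above:

    docker network inspect custom-subnet-770059 --format '{{(index .IPAM.Config 0).Subnet}}'
    # 192.168.60.0/24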

                                                
                                    
x
+
TestKicStaticIP (32.44s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-953110 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-953110 --static-ip=192.168.200.200: (30.201059385s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-953110 ip
helpers_test.go:175: Cleaning up "static-ip-953110" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-953110
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-953110: (2.075364597s)
--- PASS: TestKicStaticIP (32.44s)
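Note: likewise, the address requested with --static-ip is confirmed by asking minikube for the node IP; the expected output below is the value passed on the command line, which the passing run above bears out:

    out/minikube-linux-arm64 start -p static-ip-953110 --static-ip=192.168.200.200
    out/minikube-linux-arm64 -p static-ip-953110 ip
    # 192.168.200.200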

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (69.9s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-058673 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-058673 --driver=docker  --container-runtime=containerd: (32.927858439s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-061471 --driver=docker  --container-runtime=containerd
E0914 18:04:07.884113  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-061471 --driver=docker  --container-runtime=containerd: (31.461050264s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-058673
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-061471
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-061471" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-061471
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-061471: (2.019907542s)
helpers_test.go:175: Cleaning up "first-058673" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-058673
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-058673: (2.207302078s)
--- PASS: TestMinikubeProfile (69.90s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (6.45s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-501850 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-501850 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.443917083s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.45s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-501850 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)
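Note: the two steps above amount to starting a Kubernetes-free node with a host mount and then listing the mount point over SSH; a non-error listing of /minikube-host is all the verification does. Condensed, with the flags taken from the start command above:

    out/minikube-linux-arm64 start -p mount-start-1-501850 --memory=2048 --mount \
      --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
      --no-kubernetes --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p mount-start-1-501850 ssh -- ls /minikube-host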

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (6.28s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-504197 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-504197 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.275486873s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.28s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-504197 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-501850 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-501850 --alsologtostderr -v=5: (1.624100088s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-504197 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-504197
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-504197: (1.223714328s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.47s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-504197
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-504197: (6.473154243s)
--- PASS: TestMountStart/serial/RestartStopped (7.47s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-504197 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (66.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-987183 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0914 18:05:17.797000  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-987183 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m5.682511877s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (66.18s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (18.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-987183 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-987183 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-987183 -- rollout status deployment/busybox: (16.290374164s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-987183 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-987183 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-987183 -- exec busybox-7dff88458-j8ctj -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-987183 -- exec busybox-7dff88458-vtg7r -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-987183 -- exec busybox-7dff88458-j8ctj -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-987183 -- exec busybox-7dff88458-vtg7r -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-987183 -- exec busybox-7dff88458-j8ctj -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-987183 -- exec busybox-7dff88458-vtg7r -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (18.18s)
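Note: the DNS portion of the check runs three lookups of increasing specificity from each busybox pod (kubernetes.io, kubernetes.default, kubernetes.default.svc.cluster.local); a zero exit status from each lookup is presumably what counts as success here. For one pod, via minikube's bundled kubectl:

    out/minikube-linux-arm64 kubectl -p multinode-987183 -- exec busybox-7dff88458-j8ctj -- nslookup kubernetes.default.svc.cluster.local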

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-987183 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-987183 -- exec busybox-7dff88458-j8ctj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-987183 -- exec busybox-7dff88458-j8ctj -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-987183 -- exec busybox-7dff88458-vtg7r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-987183 -- exec busybox-7dff88458-vtg7r -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)
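Note: host reachability is checked by resolving host.minikube.internal inside the pod and pinging the address that comes back (192.168.67.1 in this run); the awk 'NR==5' is tied to the line layout of busybox's nslookup output. Reproduced for one pod:

    out/minikube-linux-arm64 kubectl -p multinode-987183 -- exec busybox-7dff88458-j8ctj -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # 192.168.67.1
    out/minikube-linux-arm64 kubectl -p multinode-987183 -- exec busybox-7dff88458-j8ctj -- \
      sh -c "ping -c 1 192.168.67.1"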

                                                
                                    
x
+
TestMultiNode/serial/AddNode (20.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-987183 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-987183 -v 3 --alsologtostderr: (19.895939304s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (20.58s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-987183 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E0914 18:06:40.862387  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiNode/serial/ProfileList (0.34s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 cp testdata/cp-test.txt multinode-987183:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 ssh -n multinode-987183 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 cp multinode-987183:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2637423075/001/cp-test_multinode-987183.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 ssh -n multinode-987183 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 cp multinode-987183:/home/docker/cp-test.txt multinode-987183-m02:/home/docker/cp-test_multinode-987183_multinode-987183-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 ssh -n multinode-987183 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 ssh -n multinode-987183-m02 "sudo cat /home/docker/cp-test_multinode-987183_multinode-987183-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 cp multinode-987183:/home/docker/cp-test.txt multinode-987183-m03:/home/docker/cp-test_multinode-987183_multinode-987183-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 ssh -n multinode-987183 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 ssh -n multinode-987183-m03 "sudo cat /home/docker/cp-test_multinode-987183_multinode-987183-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 cp testdata/cp-test.txt multinode-987183-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 ssh -n multinode-987183-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 cp multinode-987183-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2637423075/001/cp-test_multinode-987183-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 ssh -n multinode-987183-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 cp multinode-987183-m02:/home/docker/cp-test.txt multinode-987183:/home/docker/cp-test_multinode-987183-m02_multinode-987183.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 ssh -n multinode-987183-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 ssh -n multinode-987183 "sudo cat /home/docker/cp-test_multinode-987183-m02_multinode-987183.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 cp multinode-987183-m02:/home/docker/cp-test.txt multinode-987183-m03:/home/docker/cp-test_multinode-987183-m02_multinode-987183-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 ssh -n multinode-987183-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 ssh -n multinode-987183-m03 "sudo cat /home/docker/cp-test_multinode-987183-m02_multinode-987183-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 cp testdata/cp-test.txt multinode-987183-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 ssh -n multinode-987183-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 cp multinode-987183-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2637423075/001/cp-test_multinode-987183-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 ssh -n multinode-987183-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 cp multinode-987183-m03:/home/docker/cp-test.txt multinode-987183:/home/docker/cp-test_multinode-987183-m03_multinode-987183.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 ssh -n multinode-987183-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 ssh -n multinode-987183 "sudo cat /home/docker/cp-test_multinode-987183-m03_multinode-987183.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 cp multinode-987183-m03:/home/docker/cp-test.txt multinode-987183-m02:/home/docker/cp-test_multinode-987183-m03_multinode-987183-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 ssh -n multinode-987183-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 ssh -n multinode-987183-m02 "sudo cat /home/docker/cp-test_multinode-987183-m03_multinode-987183-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.30s)
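The copy matrix above boils down to three `minikube cp` directions (host to node, node to host, node to node), each verified with `minikube ssh -n <node> "sudo cat ..."`. A sketch using the same profile; the destination file names here are illustrative, not taken from the test:

  minikube -p multinode-987183 cp testdata/cp-test.txt multinode-987183:/home/docker/cp-test.txt            # host -> node
  minikube -p multinode-987183 cp multinode-987183:/home/docker/cp-test.txt /tmp/cp-test-from-node.txt      # node -> host
  minikube -p multinode-987183 cp multinode-987183:/home/docker/cp-test.txt multinode-987183-m02:/home/docker/cp-test-copy.txt   # node -> node
  minikube -p multinode-987183 ssh -n multinode-987183-m02 "sudo cat /home/docker/cp-test-copy.txt"         # verify the copy landed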

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-987183 node stop m03: (1.213361996s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-987183 status: exit status 7 (559.862052ms)

                                                
                                                
-- stdout --
	multinode-987183
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-987183-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-987183-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-987183 status --alsologtostderr: exit status 7 (511.304259ms)

                                                
                                                
-- stdout --
	multinode-987183
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-987183-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-987183-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 18:06:53.317729  417712 out.go:345] Setting OutFile to fd 1 ...
	I0914 18:06:53.317915  417712 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:06:53.317946  417712 out.go:358] Setting ErrFile to fd 2...
	I0914 18:06:53.317966  417712 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:06:53.318231  417712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-292860/.minikube/bin
	I0914 18:06:53.318437  417712 out.go:352] Setting JSON to false
	I0914 18:06:53.318522  417712 mustload.go:65] Loading cluster: multinode-987183
	I0914 18:06:53.318620  417712 notify.go:220] Checking for updates...
	I0914 18:06:53.319007  417712 config.go:182] Loaded profile config "multinode-987183": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0914 18:06:53.319046  417712 status.go:255] checking status of multinode-987183 ...
	I0914 18:06:53.319686  417712 cli_runner.go:164] Run: docker container inspect multinode-987183 --format={{.State.Status}}
	I0914 18:06:53.338139  417712 status.go:330] multinode-987183 host status = "Running" (err=<nil>)
	I0914 18:06:53.338161  417712 host.go:66] Checking if "multinode-987183" exists ...
	I0914 18:06:53.338462  417712 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-987183
	I0914 18:06:53.363872  417712 host.go:66] Checking if "multinode-987183" exists ...
	I0914 18:06:53.364227  417712 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 18:06:53.364270  417712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-987183
	I0914 18:06:53.383034  417712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/multinode-987183/id_rsa Username:docker}
	I0914 18:06:53.480832  417712 ssh_runner.go:195] Run: systemctl --version
	I0914 18:06:53.485014  417712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:06:53.496501  417712 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 18:06:53.560360  417712 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-14 18:06:53.54927213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 18:06:53.560950  417712 kubeconfig.go:125] found "multinode-987183" server: "https://192.168.67.2:8443"
	I0914 18:06:53.560985  417712 api_server.go:166] Checking apiserver status ...
	I0914 18:06:53.561033  417712 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 18:06:53.572441  417712 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1360/cgroup
	I0914 18:06:53.581789  417712 api_server.go:182] apiserver freezer: "10:freezer:/docker/a99e72044ee9e5cc6f16bad9735976611dce4b4a87395db8b39d0f2f846db486/kubepods/burstable/pod19aa5eb2288e11c75b0a7017bd96f9c7/187ec9dabf28fd0a5f8d6ef8838c8b00ba0903e460354004b0a5d23d21dd50de"
	I0914 18:06:53.581863  417712 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a99e72044ee9e5cc6f16bad9735976611dce4b4a87395db8b39d0f2f846db486/kubepods/burstable/pod19aa5eb2288e11c75b0a7017bd96f9c7/187ec9dabf28fd0a5f8d6ef8838c8b00ba0903e460354004b0a5d23d21dd50de/freezer.state
	I0914 18:06:53.590433  417712 api_server.go:204] freezer state: "THAWED"
	I0914 18:06:53.590463  417712 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0914 18:06:53.599069  417712 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0914 18:06:53.599099  417712 status.go:422] multinode-987183 apiserver status = Running (err=<nil>)
	I0914 18:06:53.599118  417712 status.go:257] multinode-987183 status: &{Name:multinode-987183 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 18:06:53.599137  417712 status.go:255] checking status of multinode-987183-m02 ...
	I0914 18:06:53.599446  417712 cli_runner.go:164] Run: docker container inspect multinode-987183-m02 --format={{.State.Status}}
	I0914 18:06:53.615742  417712 status.go:330] multinode-987183-m02 host status = "Running" (err=<nil>)
	I0914 18:06:53.615771  417712 host.go:66] Checking if "multinode-987183-m02" exists ...
	I0914 18:06:53.616090  417712 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-987183-m02
	I0914 18:06:53.632149  417712 host.go:66] Checking if "multinode-987183-m02" exists ...
	I0914 18:06:53.632504  417712 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 18:06:53.632563  417712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-987183-m02
	I0914 18:06:53.649035  417712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33283 SSHKeyPath:/home/jenkins/minikube-integration/19643-292860/.minikube/machines/multinode-987183-m02/id_rsa Username:docker}
	I0914 18:06:53.744768  417712 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 18:06:53.756600  417712 status.go:257] multinode-987183-m02 status: &{Name:multinode-987183-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0914 18:06:53.756635  417712 status.go:255] checking status of multinode-987183-m03 ...
	I0914 18:06:53.757001  417712 cli_runner.go:164] Run: docker container inspect multinode-987183-m03 --format={{.State.Status}}
	I0914 18:06:53.774060  417712 status.go:330] multinode-987183-m03 host status = "Stopped" (err=<nil>)
	I0914 18:06:53.774086  417712 status.go:343] host is not running, skipping remaining checks
	I0914 18:06:53.774095  417712 status.go:257] multinode-987183-m03 status: &{Name:multinode-987183-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
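In command form, the single-node stop exercised above — a sketch assuming the same profile; note that `status` deliberately exits non-zero (7) while any node is down, which is what the test asserts:

  minikube -p multinode-987183 node stop m03   # stop only the m03 worker
  minikube -p multinode-987183 status          # exit status 7: m03 reports host/kubelet Stopped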

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (10.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-987183 node start m03 -v=7 --alsologtostderr: (9.290893271s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.06s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (103s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-987183
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-987183
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-987183: (25.054056197s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-987183 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-987183 --wait=true -v=8 --alsologtostderr: (1m17.805808997s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-987183
--- PASS: TestMultiNode/serial/RestartKeepsNodes (103.00s)
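The restart check above can be reproduced by hand by diffing the node list before and after a full stop/start cycle — a sketch, same profile name as in the log:

  minikube node list -p multinode-987183            # record the node set
  minikube stop  -p multinode-987183                # stop every node in the profile
  minikube start -p multinode-987183 --wait=true    # restart; workers are expected to rejoin automatically
  minikube node list -p multinode-987183            # should match the first listing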

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-987183 node delete m03: (4.835519188s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.53s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 stop
E0914 18:09:07.884412  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-987183 stop: (23.773755042s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-987183 status: exit status 7 (85.064491ms)

                                                
                                                
-- stdout --
	multinode-987183
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-987183-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-987183 status --alsologtostderr: exit status 7 (101.292005ms)

                                                
                                                
-- stdout --
	multinode-987183
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-987183-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 18:09:16.279937  426159 out.go:345] Setting OutFile to fd 1 ...
	I0914 18:09:16.280118  426159 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:09:16.280148  426159 out.go:358] Setting ErrFile to fd 2...
	I0914 18:09:16.280172  426159 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:09:16.280445  426159 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-292860/.minikube/bin
	I0914 18:09:16.280660  426159 out.go:352] Setting JSON to false
	I0914 18:09:16.280733  426159 mustload.go:65] Loading cluster: multinode-987183
	I0914 18:09:16.280855  426159 notify.go:220] Checking for updates...
	I0914 18:09:16.281229  426159 config.go:182] Loaded profile config "multinode-987183": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0914 18:09:16.281269  426159 status.go:255] checking status of multinode-987183 ...
	I0914 18:09:16.281860  426159 cli_runner.go:164] Run: docker container inspect multinode-987183 --format={{.State.Status}}
	I0914 18:09:16.299893  426159 status.go:330] multinode-987183 host status = "Stopped" (err=<nil>)
	I0914 18:09:16.299913  426159 status.go:343] host is not running, skipping remaining checks
	I0914 18:09:16.299920  426159 status.go:257] multinode-987183 status: &{Name:multinode-987183 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 18:09:16.299949  426159 status.go:255] checking status of multinode-987183-m02 ...
	I0914 18:09:16.300271  426159 cli_runner.go:164] Run: docker container inspect multinode-987183-m02 --format={{.State.Status}}
	I0914 18:09:16.329975  426159 status.go:330] multinode-987183-m02 host status = "Stopped" (err=<nil>)
	I0914 18:09:16.329996  426159 status.go:343] host is not running, skipping remaining checks
	I0914 18:09:16.330003  426159 status.go:257] multinode-987183-m02 status: &{Name:multinode-987183-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.96s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (48.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-987183 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-987183 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (47.579241167s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-987183 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.25s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (36.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-987183
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-987183-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-987183-m02 --driver=docker  --container-runtime=containerd: exit status 14 (73.885416ms)

                                                
                                                
-- stdout --
	* [multinode-987183-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19643-292860/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-292860/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-987183-m02' is duplicated with machine name 'multinode-987183-m02' in profile 'multinode-987183'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-987183-m03 --driver=docker  --container-runtime=containerd
E0914 18:10:17.796628  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:10:30.952608  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-987183-m03 --driver=docker  --container-runtime=containerd: (33.850706708s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-987183
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-987183: exit status 80 (303.754935ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-987183 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-987183-m03 already exists in multinode-987183-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-987183-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-987183-m03: (1.99405895s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.28s)
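Both non-zero exits above come from name collisions rather than from the cluster itself. Roughly, with the exit codes as observed in this run:

  minikube node list -p multinode-987183
  minikube start -p multinode-987183-m02 --driver=docker --container-runtime=containerd   # exit 14: name already used by a machine in profile multinode-987183
  minikube start -p multinode-987183-m03 --driver=docker --container-runtime=containerd   # succeeds as a separate single-node profile
  minikube node add -p multinode-987183                                                   # exit 80: the next node name (m03) collides with that new profile
  minikube delete -p multinode-987183-m03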

                                                
                                    
x
+
TestPreload (114.39s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-021792 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-021792 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m16.113659213s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-021792 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-021792 image pull gcr.io/k8s-minikube/busybox: (1.981335279s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-021792
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-021792: (12.033144756s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-021792 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-021792 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (21.306221865s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-021792 image list
helpers_test.go:175: Cleaning up "test-preload-021792" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-021792
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-021792: (2.619734395s)
--- PASS: TestPreload (114.39s)
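A sketch of the preload scenario above, reusing the profile name from the log; the check is (roughly) that an image pulled into the node survives a stop/start even though the first start ran with the preloaded tarball disabled:

  minikube start -p test-preload-021792 --memory=2200 --preload=false --driver=docker --container-runtime=containerd --kubernetes-version=v1.24.4
  minikube -p test-preload-021792 image pull gcr.io/k8s-minikube/busybox   # side-load an image into the node
  minikube stop  -p test-preload-021792
  minikube start -p test-preload-021792 --memory=2200 --wait=true --driver=docker --container-runtime=containerd
  minikube -p test-preload-021792 image list                               # busybox should still be listed
  minikube delete -p test-preload-021792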

                                                
                                    
x
+
TestScheduledStopUnix (106.19s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-480370 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-480370 --memory=2048 --driver=docker  --container-runtime=containerd: (29.966273589s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-480370 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-480370 -n scheduled-stop-480370
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-480370 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-480370 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-480370 -n scheduled-stop-480370
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-480370
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-480370 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0914 18:14:07.883807  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-480370
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-480370: exit status 7 (64.238565ms)

                                                
                                                
-- stdout --
	scheduled-stop-480370
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-480370 -n scheduled-stop-480370
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-480370 -n scheduled-stop-480370: exit status 7 (68.703408ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-480370" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-480370
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-480370: (4.62664867s)
--- PASS: TestScheduledStopUnix (106.19s)
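The scheduled-stop flow above, as plain commands; the durations are whatever is passed to --schedule:

  minikube stop -p scheduled-stop-480370 --schedule 5m        # arm a stop 5 minutes out
  minikube stop -p scheduled-stop-480370 --cancel-scheduled   # disarm it; the host stays Running
  minikube stop -p scheduled-stop-480370 --schedule 15s       # re-arm; roughly 15s later the profile is stopped
  minikube status -p scheduled-stop-480370                    # exit status 7 once host/kubelet/apiserver report Stopped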

                                                
                                    
x
+
TestInsufficientStorage (10.35s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-749118 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-749118 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.920949243s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d100e2d5-a59c-4532-991a-c753e3daf2b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-749118] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4f73d55a-13cb-4eca-a4cc-718209c6ceb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19643"}}
	{"specversion":"1.0","id":"2a71ac6b-bffb-4a6e-8649-2cfcd3a70f39","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"26dec319-92d2-4c6e-a31a-8b92d3eb65ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19643-292860/kubeconfig"}}
	{"specversion":"1.0","id":"29c16a07-5940-4332-851e-a4172457643c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-292860/.minikube"}}
	{"specversion":"1.0","id":"4943745c-83e9-4618-ba44-345a0cbad1c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"720f3b38-0774-404a-a91e-3be3fa326579","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a202f7a1-cf78-43cc-a423-5d115d4f39b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"a9b0cab7-a904-4ce3-be8c-1909e188f281","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"a1bff380-ef74-4ef1-aede-9ad1cfffaec5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"563db154-3f4c-49df-b59d-4bb98af8a363","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"991cf64e-52e4-4c95-acd4-7c2be3595530","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-749118\" primary control-plane node in \"insufficient-storage-749118\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7c12fb29-3bf5-4781-8ee0-c1184ab559e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726281268-19643 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b32c8d16-7a29-4204-b50b-03bdfc4caeff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"d1de1ac1-e460-49f7-88cc-262db6c040a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-749118 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-749118 --output=json --layout=cluster: exit status 7 (295.673202ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-749118","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-749118","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 18:14:33.535050  444780 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-749118" does not appear in /home/jenkins/minikube-integration/19643-292860/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-749118 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-749118 --output=json --layout=cluster: exit status 7 (271.439077ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-749118","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-749118","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0914 18:14:33.811790  444840 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-749118" does not appear in /home/jenkins/minikube-integration/19643-292860/kubeconfig
	E0914 18:14:33.821802  444840 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/insufficient-storage-749118/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-749118" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-749118
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-749118: (1.861906851s)
--- PASS: TestInsufficientStorage (10.35s)
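The out-of-space behaviour above appears to be simulated through the MINIKUBE_TEST_STORAGE_CAPACITY / MINIKUBE_TEST_AVAILABLE_STORAGE settings visible in the JSON events (an assumption about the test harness, but consistent with the log). The observable contract, as a sketch:

  minikube start -p insufficient-storage-749118 --memory=2048 --output=json --wait=true --driver=docker --container-runtime=containerd
  # exit 26 (RSRC_DOCKER_STORAGE): "Docker is out of disk space!"; '--force' skips the check
  minikube status -p insufficient-storage-749118 --output=json --layout=cluster
  # exit 7; the JSON reports StatusCode 507 ("InsufficientStorage")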

                                                
                                    
x
+
TestRunningBinaryUpgrade (82.26s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3461571656 start -p running-upgrade-973519 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3461571656 start -p running-upgrade-973519 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (46.762927901s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-973519 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-973519 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (31.621398237s)
helpers_test.go:175: Cleaning up "running-upgrade-973519" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-973519
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-973519: (3.110559512s)
--- PASS: TestRunningBinaryUpgrade (82.26s)

                                                
                                    
x
+
TestKubernetesUpgrade (361.03s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-444403 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-444403 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m3.938787486s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-444403
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-444403: (1.345757957s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-444403 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-444403 status --format={{.Host}}: exit status 7 (120.049556ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-444403 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-444403 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m45.057082192s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-444403 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-444403 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-444403 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (99.371708ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-444403] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19643-292860/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-292860/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-444403
	    minikube start -p kubernetes-upgrade-444403 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4444032 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-444403 --kubernetes-version=v1.31.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-444403 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-444403 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (8.185524723s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-444403" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-444403
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-444403: (2.161873497s)
--- PASS: TestKubernetesUpgrade (361.03s)
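The upgrade path above in command form; the in-place downgrade is refused with exit 106 (K8S_DOWNGRADE_UNSUPPORTED), and the suggested ways around it are printed in the stderr block higher up:

  minikube start -p kubernetes-upgrade-444403 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
  minikube stop  -p kubernetes-upgrade-444403
  minikube start -p kubernetes-upgrade-444403 --memory=2200 --kubernetes-version=v1.31.1 --driver=docker --container-runtime=containerd   # upgrade in place
  minikube start -p kubernetes-upgrade-444403 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd   # exit 106: downgrade refused
  minikube delete -p kubernetes-upgrade-444403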

                                                
                                    
x
+
TestMissingContainerUpgrade (179.51s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.313048656 start -p missing-upgrade-331036 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.313048656 start -p missing-upgrade-331036 --memory=2200 --driver=docker  --container-runtime=containerd: (1m39.675025874s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-331036
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-331036: (10.343574467s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-331036
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-331036 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-331036 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m6.511529225s)
helpers_test.go:175: Cleaning up "missing-upgrade-331036" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-331036
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-331036: (2.261429859s)
--- PASS: TestMissingContainerUpgrade (179.51s)

                                                
                                    
x
+
TestPause/serial/Start (87.33s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-761767 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-761767 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m27.332946723s)
--- PASS: TestPause/serial/Start (87.33s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-232641 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-232641 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (107.02999ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-232641] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19643-292860/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-292860/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
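As the stderr block shows, --no-kubernetes and --kubernetes-version are mutually exclusive, which is exactly what this test checks. A sketch; the `config unset` step only matters if a global kubernetes-version has been configured:

  minikube start -p NoKubernetes-232641 --no-kubernetes --kubernetes-version=1.20 --driver=docker --container-runtime=containerd   # exit 14 (MK_USAGE)
  minikube config unset kubernetes-version
  minikube start -p NoKubernetes-232641 --no-kubernetes --driver=docker --container-runtime=containerd                             # starts the node without kubelet/apiserver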

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (40.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-232641 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-232641 --driver=docker  --container-runtime=containerd: (40.124362372s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-232641 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.55s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (18.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-232641 --no-kubernetes --driver=docker  --container-runtime=containerd
E0914 18:15:17.796556  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-232641 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.956046131s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-232641 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-232641 status -o json: exit status 2 (325.233044ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-232641","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-232641
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-232641: (1.922812358s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (5.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-232641 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-232641 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.705206804s)
--- PASS: TestNoKubernetes/serial/Start (5.71s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-232641 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-232641 "sudo systemctl is-active --quiet service kubelet": exit status 1 (259.402916ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.26s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.98s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-232641
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-232641: (1.203967175s)
--- PASS: TestNoKubernetes/serial/Stop (1.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-232641 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-232641 --driver=docker  --container-runtime=containerd: (6.423193048s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.42s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-232641 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-232641 "sudo systemctl is-active --quiet service kubelet": exit status 1 (266.87325ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

TestPause/serial/SecondStartNoReconfiguration (6.01s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-761767 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-761767 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (5.992356208s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.01s)

TestPause/serial/Pause (0.87s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-761767 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.87s)

TestPause/serial/VerifyStatus (0.45s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-761767 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-761767 --output=json --layout=cluster: exit status 2 (445.971033ms)

                                                
                                                
-- stdout --
	{"Name":"pause-761767","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-761767","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
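
As an aside, the paused state can be read straight out of the --layout=cluster JSON above. A minimal sketch assuming jq (field paths taken from the output shown; exit status 2 is expected while the cluster is paused, hence the `|| true`):

OUT="$(out/minikube-linux-arm64 status -p pause-761767 --output=json --layout=cluster || true)"
echo "$OUT" | jq -r '.StatusName'                                # Paused
echo "$OUT" | jq -r '.Nodes[0].Components.apiserver.StatusName'  # Paused
echo "$OUT" | jq -r '.Nodes[0].Components.kubelet.StatusName'    # Stopped
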
--- PASS: TestPause/serial/VerifyStatus (0.45s)

TestPause/serial/Unpause (1.17s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-761767 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-761767 --alsologtostderr -v=5: (1.171657337s)
--- PASS: TestPause/serial/Unpause (1.17s)

TestPause/serial/PauseAgain (1.28s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-761767 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-761767 --alsologtostderr -v=5: (1.282021833s)
--- PASS: TestPause/serial/PauseAgain (1.28s)

TestPause/serial/DeletePaused (3.18s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-761767 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-761767 --alsologtostderr -v=5: (3.177987876s)
--- PASS: TestPause/serial/DeletePaused (3.18s)

TestPause/serial/VerifyDeletedResources (0.17s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-761767
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-761767: exit status 1 (20.737626ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-761767: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
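
The same post-delete verification can be done by hand. A short sketch of the checks this test performs (commands as run above; the wrapper around the volume lookup is illustrative):

# After `delete -p pause-761767`, the profile volume should be gone, so the
# inspect call is expected to fail; leftovers would also show up in the
# container and network listings below.
if docker volume inspect pause-761767 >/dev/null 2>&1; then
  echo "volume still present: cleanup incomplete" >&2
else
  echo "volume removed as expected"
fi
docker ps -a
docker network ls
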
--- PASS: TestPause/serial/VerifyDeletedResources (0.17s)

TestStoppedBinaryUpgrade/Setup (0.72s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.72s)

TestStoppedBinaryUpgrade/Upgrade (107.77s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3513395507 start -p stopped-upgrade-397585 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0914 18:19:07.884051  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3513395507 start -p stopped-upgrade-397585 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (47.601150647s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3513395507 -p stopped-upgrade-397585 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3513395507 -p stopped-upgrade-397585 stop: (20.017843252s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-397585 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0914 18:20:17.797164  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-397585 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (40.15283213s)
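
Taken together, the upgrade flow exercised here is: provision with an old release, stop the cluster, then start the same profile with the binary under test. A condensed recap of the commands run above (note the old binary still uses --vm-driver while the new one uses --driver):

# Recap only; paths and versions taken from the log lines above.
/tmp/minikube-v1.26.0.3513395507 start -p stopped-upgrade-397585 --memory=2200 --vm-driver=docker --container-runtime=containerd
/tmp/minikube-v1.26.0.3513395507 -p stopped-upgrade-397585 stop
out/minikube-linux-arm64 start -p stopped-upgrade-397585 --memory=2200 --driver=docker --container-runtime=containerd
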
--- PASS: TestStoppedBinaryUpgrade/Upgrade (107.77s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.03s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-397585
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-397585: (1.034524021s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.03s)

TestNetworkPlugins/group/false (4.48s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-620150 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-620150 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (242.69521ms)

                                                
                                                
-- stdout --
	* [false-620150] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19643
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19643-292860/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-292860/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0914 18:22:21.777970  485373 out.go:345] Setting OutFile to fd 1 ...
	I0914 18:22:21.778612  485373 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:22:21.778649  485373 out.go:358] Setting ErrFile to fd 2...
	I0914 18:22:21.778669  485373 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 18:22:21.778981  485373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19643-292860/.minikube/bin
	I0914 18:22:21.779508  485373 out.go:352] Setting JSON to false
	I0914 18:22:21.780449  485373 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7494,"bootTime":1726330648,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0914 18:22:21.781299  485373 start.go:139] virtualization:  
	I0914 18:22:21.785298  485373 out.go:177] * [false-620150] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0914 18:22:21.787347  485373 out.go:177]   - MINIKUBE_LOCATION=19643
	I0914 18:22:21.787394  485373 notify.go:220] Checking for updates...
	I0914 18:22:21.792561  485373 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0914 18:22:21.794796  485373 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19643-292860/kubeconfig
	I0914 18:22:21.796288  485373 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19643-292860/.minikube
	I0914 18:22:21.798294  485373 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0914 18:22:21.800161  485373 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0914 18:22:21.802625  485373 config.go:182] Loaded profile config "force-systemd-flag-560938": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0914 18:22:21.802737  485373 driver.go:394] Setting default libvirt URI to qemu:///system
	I0914 18:22:21.833052  485373 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0914 18:22:21.833178  485373 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 18:22:21.925553  485373 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-14 18:22:21.914406952 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 18:22:21.925655  485373 docker.go:318] overlay module found
	I0914 18:22:21.928326  485373 out.go:177] * Using the docker driver based on user configuration
	I0914 18:22:21.930704  485373 start.go:297] selected driver: docker
	I0914 18:22:21.930723  485373 start.go:901] validating driver "docker" against <nil>
	I0914 18:22:21.930738  485373 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0914 18:22:21.933137  485373 out.go:201] 
	W0914 18:22:21.935146  485373 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0914 18:22:21.937078  485373 out.go:201] 

                                                
                                                
** /stderr **
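
The exit status 14 above is the expected result: `--cni=false` is rejected because the containerd runtime requires a CNI. A hedged sketch of a start invocation that would pass this validation (the `bridge` value is one of minikube's built-in CNI options and is an illustrative choice; any value other than false, or simply omitting the flag, avoids the MK_USAGE error):

# Illustrative only; not run by this test, which deliberately passes --cni=false.
out/minikube-linux-arm64 start -p false-620150 --memory=2048 \
  --cni=bridge --driver=docker --container-runtime=containerd
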
net_test.go:88: 
----------------------- debugLogs start: false-620150 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-620150

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-620150

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-620150

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-620150

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-620150

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-620150

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-620150

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-620150

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-620150

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-620150

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-620150

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-620150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-620150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-620150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-620150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-620150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-620150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-620150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-620150" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-620150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-620150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-620150" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-620150

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-620150"

                                                
                                                
----------------------- debugLogs end: false-620150 [took: 4.043915651s] --------------------------------
helpers_test.go:175: Cleaning up "false-620150" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-620150
--- PASS: TestNetworkPlugins/group/false (4.48s)

TestStartStop/group/old-k8s-version/serial/FirstStart (148.18s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-947842 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0914 18:24:07.884106  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:25:17.797294  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-947842 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m28.18014813s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (148.18s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.6s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-947842 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c2bc48f0-f748-44fe-9ea7-ae4d808104c0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c2bc48f0-f748-44fe-9ea7-ae4d808104c0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003577752s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-947842 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.60s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.4s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-947842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-947842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.219483345s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-947842 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.40s)

TestStartStop/group/old-k8s-version/serial/Stop (12.42s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-947842 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-947842 --alsologtostderr -v=3: (12.424382359s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.42s)

TestStartStop/group/no-preload/serial/FirstStart (67.57s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-760354 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-760354 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m7.5721138s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (67.57s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.3s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-947842 -n old-k8s-version-947842
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-947842 -n old-k8s-version-947842: exit status 7 (131.965358ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-947842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.30s)

TestStartStop/group/no-preload/serial/DeployApp (8.37s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-760354 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0f749911-07e4-40ec-a371-d8bfeda6110c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0f749911-07e4-40ec-a371-d8bfeda6110c] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004969403s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-760354 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.37s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-760354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-760354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.068862492s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-760354 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/no-preload/serial/Stop (12.09s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-760354 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-760354 --alsologtostderr -v=3: (12.091802141s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.09s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-760354 -n no-preload-760354
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-760354 -n no-preload-760354: exit status 7 (73.099438ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-760354 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (267.6s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-760354 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0914 18:29:07.884103  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:30:17.796909  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-760354 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m27.250751422s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-760354 -n no-preload-760354
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (267.60s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wnwz4" [0a049542-ddeb-4ea9-ad08-cf69b7111bab] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004132356s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wnwz4" [0a049542-ddeb-4ea9-ad08-cf69b7111bab] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004686753s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-760354 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-760354 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.18s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-760354 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-760354 -n no-preload-760354
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-760354 -n no-preload-760354: exit status 2 (362.664673ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-760354 -n no-preload-760354
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-760354 -n no-preload-760354: exit status 2 (329.816416ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-760354 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-760354 -n no-preload-760354
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-760354 -n no-preload-760354
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.18s)

TestStartStop/group/embed-certs/serial/FirstStart (85.74s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-930089 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-930089 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m25.735180397s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (85.74s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-ddl7d" [b72b1b01-0b24-48cf-8381-b95fc3c8f749] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005286132s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-ddl7d" [b72b1b01-0b24-48cf-8381-b95fc3c8f749] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004252926s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-947842 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.13s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.3s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-947842 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/old-k8s-version/serial/Pause (4.21s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-947842 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-947842 --alsologtostderr -v=1: (1.048556446s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-947842 -n old-k8s-version-947842
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-947842 -n old-k8s-version-947842: exit status 2 (419.416454ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-947842 -n old-k8s-version-947842
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-947842 -n old-k8s-version-947842: exit status 2 (399.381169ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-947842 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-947842 --alsologtostderr -v=1: (1.23493559s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-947842 -n old-k8s-version-947842
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-947842 -n old-k8s-version-947842
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.21s)
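Note on the Pause check: it is a fixed sequence of pausing the profile, confirming the API server reports Paused and the kubelet reports Stopped (the exit status 2 from `minikube status` is expected for non-running components, hence the "may be ok" notes), then unpausing and re-reading both fields. A condensed sketch of driving that sequence through os/exec follows; the binary path and profile name are the ones in the log, the helper itself is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// statusField runs `minikube status` with a Go template for one field and
// returns the trimmed output. The error is ignored on purpose: status exits
// non-zero while components are paused or stopped, but still prints the field.
func statusField(bin, profile, tmpl string) string {
	out, _ := exec.Command(bin, "status", "--format", tmpl, "-p", profile, "-n", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	bin := "out/minikube-linux-arm64"
	profile := "old-k8s-version-947842"

	if err := exec.Command(bin, "pause", "-p", profile, "--alsologtostderr", "-v=1").Run(); err != nil {
		panic(err)
	}
	fmt.Println("apiserver:", statusField(bin, profile, "{{.APIServer}}")) // "Paused" while paused
	fmt.Println("kubelet:  ", statusField(bin, profile, "{{.Kubelet}}"))   // "Stopped" while paused

	if err := exec.Command(bin, "unpause", "-p", profile, "--alsologtostderr", "-v=1").Run(); err != nil {
		panic(err)
	}
	fmt.Println("apiserver:", statusField(bin, profile, "{{.APIServer}}"))
	fmt.Println("kubelet:  ", statusField(bin, profile, "{{.Kubelet}}"))
}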

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-531203 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0914 18:34:07.884462  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-531203 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m19.315606183s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.32s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-930089 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [82b9b9fd-4f91-48c0-a9c6-70b681c25dbb] Pending
helpers_test.go:344: "busybox" [82b9b9fd-4f91-48c0-a9c6-70b681c25dbb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [82b9b9fd-4f91-48c0-a9c6-70b681c25dbb] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.00362497s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-930089 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.34s)
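Note on DeployApp: it applies testdata/busybox.yaml, waits for the integration-test=busybox pod with the same label-polling pattern sketched earlier, and finishes by reading the container's open-file-descriptor limit. A minimal sketch of that final step, with the context name taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Read `ulimit -n` inside the busybox pod; the context name comes from the log above.
	out, err := exec.Command("kubectl", "--context", "embed-certs-930089",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("open file limit in pod:", strings.TrimSpace(string(out)))
}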

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-930089 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-930089 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.087431023s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-930089 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-930089 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-930089 --alsologtostderr -v=3: (12.16499163s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-531203 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [362cb446-96e0-497e-a5f5-27bf08884923] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [362cb446-96e0-497e-a5f5-27bf08884923] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004216927s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-531203 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.35s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-930089 -n embed-certs-930089
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-930089 -n embed-certs-930089: exit status 7 (74.76861ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-930089 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (266.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-930089 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-930089 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m26.603708858s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-930089 -n embed-certs-930089
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.97s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-531203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-531203 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.272595331s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-531203 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.40s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.58s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-531203 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-531203 --alsologtostderr -v=3: (12.576105905s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.58s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-531203 -n default-k8s-diff-port-531203
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-531203 -n default-k8s-diff-port-531203: exit status 7 (78.624004ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-531203 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (269.45s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-531203 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0914 18:35:17.796488  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:36:18.863637  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/old-k8s-version-947842/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:36:18.870004  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/old-k8s-version-947842/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:36:18.881380  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/old-k8s-version-947842/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:36:18.902817  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/old-k8s-version-947842/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:36:18.944289  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/old-k8s-version-947842/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:36:19.025802  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/old-k8s-version-947842/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:36:19.187421  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/old-k8s-version-947842/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:36:19.508939  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/old-k8s-version-947842/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:36:20.150251  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/old-k8s-version-947842/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:36:21.432844  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/old-k8s-version-947842/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:36:23.994444  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/old-k8s-version-947842/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:36:29.115962  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/old-k8s-version-947842/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:36:39.357712  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/old-k8s-version-947842/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:36:59.839088  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/old-k8s-version-947842/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:37:40.800781  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/old-k8s-version-947842/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:37:46.258901  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/no-preload-760354/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:37:46.265333  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/no-preload-760354/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:37:46.276791  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/no-preload-760354/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:37:46.298973  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/no-preload-760354/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:37:46.340373  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/no-preload-760354/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:37:46.421862  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/no-preload-760354/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:37:46.583441  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/no-preload-760354/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:37:46.905144  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/no-preload-760354/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:37:47.547035  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/no-preload-760354/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:37:48.829292  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/no-preload-760354/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:37:51.390894  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/no-preload-760354/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:37:56.512844  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/no-preload-760354/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:38:06.754674  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/no-preload-760354/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:38:27.236755  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/no-preload-760354/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:39:02.723304  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/old-k8s-version-947842/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:39:07.884093  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:39:08.198949  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/no-preload-760354/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-531203 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m28.981765412s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-531203 -n default-k8s-diff-port-531203
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (269.45s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-smjgm" [69c2b984-e551-445d-a7cb-2ac43bd9e8a8] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003176584s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-smjgm" [69c2b984-e551-445d-a7cb-2ac43bd9e8a8] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004934909s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-930089 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-930089 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.33s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-930089 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-930089 --alsologtostderr -v=1: (1.093228006s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-930089 -n embed-certs-930089
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-930089 -n embed-certs-930089: exit status 2 (383.786286ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-930089 -n embed-certs-930089
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-930089 -n embed-certs-930089: exit status 2 (317.118127ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-930089 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-930089 -n embed-certs-930089
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-930089 -n embed-certs-930089
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.38s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (38.8s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-338061 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-338061 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (38.79835438s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.80s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-8lvjw" [5827381b-12f7-4c40-ad6a-4fc351b89afb] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00479185s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-8lvjw" [5827381b-12f7-4c40-ad6a-4fc351b89afb] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00584729s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-531203 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-531203 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-531203 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-531203 --alsologtostderr -v=1: (1.057349664s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-531203 -n default-k8s-diff-port-531203
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-531203 -n default-k8s-diff-port-531203: exit status 2 (387.449715ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-531203 -n default-k8s-diff-port-531203
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-531203 -n default-k8s-diff-port-531203: exit status 2 (386.361466ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-531203 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-531203 -n default-k8s-diff-port-531203
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-531203 -n default-k8s-diff-port-531203
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.99s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (101.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-620150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0914 18:40:00.866381  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-620150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m41.056156249s)
--- PASS: TestNetworkPlugins/group/auto/Start (101.06s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.62s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-338061 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-338061 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.616274133s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.62s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-338061 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-338061 --alsologtostderr -v=3: (1.336830854s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.34s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-338061 -n newest-cni-338061
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-338061 -n newest-cni-338061: exit status 7 (90.608615ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-338061 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (22s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-338061 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0914 18:40:17.796308  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/addons-478069/client.crt: no such file or directory" logger="UnhandledError"
E0914 18:40:30.121171  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/no-preload-760354/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-338061 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (21.411604125s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-338061 -n newest-cni-338061
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (22.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-338061 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.58s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-338061 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-338061 --alsologtostderr -v=1: (1.144045211s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-338061 -n newest-cni-338061
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-338061 -n newest-cni-338061: exit status 2 (467.831894ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-338061 -n newest-cni-338061
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-338061 -n newest-cni-338061: exit status 2 (450.773245ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-338061 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-338061 -n newest-cni-338061
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-338061 -n newest-cni-338061
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (50.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-620150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0914 18:41:18.863396  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/old-k8s-version-947842/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-620150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (50.264136312s)
--- PASS: TestNetworkPlugins/group/flannel/Start (50.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-rmc5b" [c7fae1fb-c634-486f-a4c8-c454e0d2a0ae] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004918364s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-620150 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-620150 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qwdr8" [45ff68db-293d-4c07-9984-5ec33e65b6e3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qwdr8" [45ff68db-293d-4c07-9984-5ec33e65b6e3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004240698s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-620150 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-620150 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hh825" [ab60f212-99fa-46e9-9a79-1beeb7fc9e91] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hh825" [ab60f212-99fa-46e9-9a79-1beeb7fc9e91] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003865707s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-620150 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-620150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-620150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
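Note on the per-CNI connectivity checks: DNS, Localhost, and HairPin all exec into the netcat deployment, running an nslookup of kubernetes.default for cluster DNS, `nc` against localhost:8080 for local reachability, and `nc` against the netcat service name from inside its own pod for hairpin traffic. A small sketch of driving those three probes through kubectl follows; the context name is taken from the auto run above, the helper is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

// probe execs a shell command inside the netcat deployment for a kubectl
// context and reports whether the command exited successfully.
func probe(kubeContext, cmd string) error {
	return exec.Command("kubectl", "--context", kubeContext,
		"exec", "deployment/netcat", "--", "/bin/sh", "-c", cmd).Run()
}

func main() {
	ctx := "auto-620150"
	checks := []struct{ name, cmd string }{
		{"dns", "nslookup kubernetes.default"},
		{"localhost", "nc -w 5 -i 5 -z localhost 8080"},
		{"hairpin", "nc -w 5 -i 5 -z netcat 8080"},
	}
	for _, c := range checks {
		if err := probe(ctx, c.cmd); err != nil {
			fmt.Printf("%s check failed: %v\n", c.name, err)
			continue
		}
		fmt.Printf("%s check passed\n", c.name)
	}
}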

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-620150 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-620150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-620150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (50.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-620150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-620150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (50.02556741s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (50.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (55.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-620150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0914 18:42:46.258154  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/no-preload-760354/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-620150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (55.552458135s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (55.55s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-620150 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-620150 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-n5zph" [c94d7d06-f80a-43f8-89c4-6112fadc6ea7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-n5zph" [c94d7d06-f80a-43f8-89c4-6112fadc6ea7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004289437s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-620150 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-620150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-620150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-2n67w" [38e1834a-4901-4bc3-abd1-699198616d81] Running
E0914 18:43:13.963742  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/no-preload-760354/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004931992s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-620150 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.46s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-620150 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vnssz" [2dd83dde-8860-4b1f-af58-c3a4fadd38dd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vnssz" [2dd83dde-8860-4b1f-af58-c3a4fadd38dd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.00444085s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.38s)
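For reference, the NetCatPod step above can be reproduced by hand against the same profile; this is only a sketch (kubectl wait stands in for the test's own polling, and the 120s timeout is an illustrative value, not part of the test):
  # redeploy the netcat test workload used by net_test.go
  kubectl --context kindnet-620150 replace --force -f testdata/netcat-deployment.yaml
  # block until the pod behind the app=netcat label reports Ready
  kubectl --context kindnet-620150 wait --for=condition=ready pod -l app=netcat --timeout=120s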

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-620150 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-620150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-620150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)
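The HairPin step exercises hairpin traffic: the netcat pod dialing itself back through its own Service name rather than localhost. A minimal manual version of the two probes, using the same nc flags that net_test.go runs, would be:
  # hairpin path: connect back to the pod through the netcat service
  kubectl --context kindnet-620150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
  # direct path: connect to the same port on localhost inside the pod
  kubectl --context kindnet-620150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"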

                                                
                                    
TestNetworkPlugins/group/bridge/Start (51.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-620150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-620150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (51.347718374s)
--- PASS: TestNetworkPlugins/group/bridge/Start (51.35s)
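The Start step is a plain minikube invocation; a trimmed equivalent (the test's --alsologtostderr and --wait flags are dropped here, and the binary is invoked as out/minikube-linux-arm64 in this report) looks like:
  # create a containerd-backed cluster with the built-in bridge CNI
  minikube start -p bridge-620150 --memory=3072 --cni=bridge --driver=docker --container-runtime=containerd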

                                                
                                    
TestNetworkPlugins/group/calico/Start (66.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-620150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0914 18:44:07.884398  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/functional-884273/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-620150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m6.948302732s)
--- PASS: TestNetworkPlugins/group/calico/Start (66.95s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-620150 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-620150 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-g4n4r" [cf6b4bd2-8df3-443d-918e-43ed65f90a20] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-g4n4r" [cf6b4bd2-8df3-443d-918e-43ed65f90a20] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004387508s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.44s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-620150 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-620150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-620150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (57.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-620150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0914 18:44:57.962414  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/default-k8s-diff-port-531203/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-620150 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (57.492978825s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.49s)
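Unlike bridge or calico above, the custom-flannel group points --cni at a manifest file rather than a built-in name; a minimal sketch of that invocation (again omitting the test's logging and wait flags) is:
  # --cni accepts a path to a CNI manifest such as testdata/kube-flannel.yaml
  minikube start -p custom-flannel-620150 --memory=3072 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=containerd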

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-smhts" [f61a3f02-7778-4129-b148-4d4739395dc4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004656398s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
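The ControllerPod step only waits for the CNI's node agent to come up; an approximate manual check (kubectl wait replaces the test's polling helper, and the timeout is illustrative) is:
  # confirm the calico-node pod is scheduled and Ready in kube-system
  kubectl --context calico-620150 -n kube-system get pods -l k8s-app=calico-node
  kubectl --context calico-620150 -n kube-system wait --for=condition=ready pod -l k8s-app=calico-node --timeout=120s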

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-620150 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-620150 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qdvts" [694fd066-1c48-4dfe-9c68-9a0478eb1a54] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qdvts" [694fd066-1c48-4dfe-9c68-9a0478eb1a54] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.006769794s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.38s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-620150 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-620150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-620150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-620150 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-620150 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nprhk" [70659941-371f-4638-a574-d5d6f89bfcc2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nprhk" [70659941-371f-4638-a574-d5d6f89bfcc2] Running
E0914 18:45:59.406596  298255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19643-292860/.minikube/profiles/default-k8s-diff-port-531203/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.013571179s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.32s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-620150 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.38s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-620150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-620150 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    

Test skip (28/328)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.55s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-877234 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-877234" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-877234
--- SKIP: TestDownloadOnlyKic (0.55s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-312206" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-312206
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)
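The helpers above remove the profile that was registered for the skipped group; the equivalent manual cleanup (the --all variant is a broader option, not something this run used) is:
  # delete the leftover profile from the skipped group
  minikube delete -p disable-driver-mounts-312206
  # or remove every local minikube profile
  minikube delete --all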

                                                
                                    
TestNetworkPlugins/group/kubenet (4.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-620150 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-620150

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-620150

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-620150

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-620150

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-620150

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-620150

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-620150

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-620150

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-620150

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-620150

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-620150

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-620150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-620150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-620150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-620150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-620150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-620150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-620150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-620150" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-620150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-620150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-620150" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-620150

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-620150"

                                                
                                                
----------------------- debugLogs end: kubenet-620150 [took: 4.119500573s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-620150" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-620150
--- SKIP: TestNetworkPlugins/group/kubenet (4.33s)

                                                
                                    
TestNetworkPlugins/group/cilium (4.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-620150 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-620150

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-620150

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-620150

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-620150

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-620150

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-620150

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-620150

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-620150

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-620150

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-620150

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-620150

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-620150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-620150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-620150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-620150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-620150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-620150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-620150" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-620150" does not exist

>>> host: /etc/cni:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

>>> host: ip a s:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

>>> host: ip r s:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

>>> host: iptables-save:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

>>> host: iptables table nat:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-620150

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-620150

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-620150" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-620150" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-620150

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-620150

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-620150" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-620150" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-620150" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-620150" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-620150" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

>>> host: kubelet daemon config:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

>>> k8s: kubelet logs:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-620150

>>> host: docker daemon status:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

>>> host: docker daemon config:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

>>> host: docker system info:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

>>> host: cri-docker daemon status:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

>>> host: cri-docker daemon config:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

>>> host: cri-dockerd version:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

>>> host: containerd daemon status:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

>>> host: containerd daemon config:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

>>> host: containerd config dump:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

>>> host: crio daemon status:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

>>> host: crio daemon config:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

>>> host: /etc/crio:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

>>> host: crio config:
* Profile "cilium-620150" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-620150"

----------------------- debugLogs end: cilium-620150 [took: 4.629855839s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-620150" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-620150
--- SKIP: TestNetworkPlugins/group/cilium (4.85s)