Test Report: Docker_Linux_crio 19780

d63f64bffc284d34b6c2581e44dece8bfcca0b7a:2024-10-09:36574

Failed tests (3/328)

| Order | Failed test                          | Duration (s) |
|-------|--------------------------------------|--------------|
| 32    | TestAddons/serial/GCPAuth/PullSecret | 480.61       |
| 35    | TestAddons/parallel/Ingress          | 153.57       |
| 37    | TestAddons/parallel/MetricsServer    | 292.51       |
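For local triage, the three failures can be re-run in isolation with Go's standard test filter. This is a sketch assuming minikube's test/integration package layout; the driver and runtime flags this CI job passes to the harness (docker + crio) are not shown in the report and are omitted here:

    # -run splits on "/" and matches each subtest level as a regex, so this
    # selects the three failing subtrees listed above; the timeout is generous
    # because PullSecret alone waits up to 8m for its pod.
    go test ./test/integration -v -timeout 60m \
      -run 'TestAddons/(serial|parallel)/(GCPAuth|Ingress|MetricsServer)'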
TestAddons/serial/GCPAuth/PullSecret (480.61s)

=== RUN   TestAddons/serial/GCPAuth/PullSecret
addons_test.go:614: (dbg) Run:  kubectl --context addons-814968 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-814968 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/PullSecret: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2d22f598-c2e1-4a30-bd26-0f9952ed8024] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:329: TestAddons/serial/GCPAuth/PullSecret: WARNING: pod list for "default" "integration-test=busybox" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:627: ***** TestAddons/serial/GCPAuth/PullSecret: pod "integration-test=busybox" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:627: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-814968 -n addons-814968
addons_test.go:627: TestAddons/serial/GCPAuth/PullSecret: showing logs for failed pods as of 2024-10-09 18:57:08.74046308 +0000 UTC m=+657.781201410
addons_test.go:627: (dbg) Run:  kubectl --context addons-814968 describe po busybox -n default
addons_test.go:627: (dbg) kubectl --context addons-814968 describe po busybox -n default:
Name:             busybox
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-814968/192.168.49.2
Start Time:       Wed, 09 Oct 2024 18:49:08 +0000
Labels:           integration-test=busybox
Annotations:      <none>
Status:           Pending
IP:               10.244.0.21
IPs:
  IP:  10.244.0.21
Containers:
  busybox:
    Container ID:  
    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
    Image ID:      
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      3600
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kxsk9 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-kxsk9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         BestEffort
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  8m                      default-scheduler  Successfully assigned default/busybox to addons-814968
  Normal   Pulling    6m24s (x4 over 8m)      kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
  Warning  Failed     6m24s (x4 over 8m)      kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
  Warning  Failed     6m24s (x4 over 8m)      kubelet            Error: ErrImagePull
  Warning  Failed     6m11s (x6 over 7m59s)   kubelet            Error: ImagePullBackOff
  Normal   BackOff    2m52s (x21 over 7m59s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
addons_test.go:627: (dbg) Run:  kubectl --context addons-814968 logs busybox -n default
addons_test.go:627: (dbg) Non-zero exit: kubectl --context addons-814968 logs busybox -n default: exit status 1 (66.317386ms)

** stderr ** 
	Error from server (BadRequest): container "busybox" in pod "busybox" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:627: kubectl --context addons-814968 logs busybox -n default: exit status 1
addons_test.go:629: wait: integration-test=busybox within 8m0s: context deadline exceeded
--- FAIL: TestAddons/serial/GCPAuth/PullSecret (480.61s)
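The kubelet events above fail on registry auth ("unable to retrieve auth token: invalid username/password"), not on a missing image. A quick manual check, reusing the profile name from this run, is to pull the same tag directly on the node and see whether the failure reproduces outside the test:

    # This tag is public, so an anonymous pull from the node should succeed.
    # If it does, the auth error comes from the pull secret the gcp-auth
    # addon injected into the pod (note the this_is_fake project env vars),
    # not from the registry or the image itself.
    minikube -p addons-814968 ssh -- sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc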

TestAddons/parallel/Ingress (153.57s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-814968 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-814968 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-814968 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [2d979590-e4d8-4466-b2a0-f94fa9fd7e9a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [2d979590-e4d8-4466-b2a0-f94fa9fd7e9a] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.002943581s
I1009 18:57:47.244777   15983 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-814968 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-814968 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.284142722s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-814968 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-814968 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
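Exit status 28 above is curl's own exit code (CURLE_OPERATION_TIMEDOUT), surfaced through ssh: the request to the ingress controller hung rather than being refused. A debugging sketch, not part of the test, that separates a hung listener from a misrouted one by adding verbose output and a short explicit timeout to the same probe:

    # -v shows whether the TCP connect succeeds and where the request stalls;
    # --max-time keeps the probe from hanging for minutes like the test did.
    out/minikube-linux-amd64 -p addons-814968 ssh \
      "curl -v --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"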
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-814968
helpers_test.go:235: (dbg) docker inspect addons-814968:

-- stdout --
	[
	    {
	        "Id": "1cffd86fbfa3a19811ce9735ea408f354dd5b50821a9dbeddc1a98d29ea08057",
	        "Created": "2024-10-09T18:46:47.904681606Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 18039,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-09T18:46:48.046279093Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3a8635a679ec007165247a79bf5f156508ffd34b58bfc31cc163a0cc0634bac6",
	        "ResolvConfPath": "/var/lib/docker/containers/1cffd86fbfa3a19811ce9735ea408f354dd5b50821a9dbeddc1a98d29ea08057/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1cffd86fbfa3a19811ce9735ea408f354dd5b50821a9dbeddc1a98d29ea08057/hostname",
	        "HostsPath": "/var/lib/docker/containers/1cffd86fbfa3a19811ce9735ea408f354dd5b50821a9dbeddc1a98d29ea08057/hosts",
	        "LogPath": "/var/lib/docker/containers/1cffd86fbfa3a19811ce9735ea408f354dd5b50821a9dbeddc1a98d29ea08057/1cffd86fbfa3a19811ce9735ea408f354dd5b50821a9dbeddc1a98d29ea08057-json.log",
	        "Name": "/addons-814968",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-814968:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-814968",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/437c8fface47263a5556077120b346b810bd07153f3033e0099cd9d246f528f9-init/diff:/var/lib/docker/overlay2/c60c6c9d5a0badaa1d73d2edf39e8bd73e404c1e1194546fbfceed54f9130ada/diff",
	                "MergedDir": "/var/lib/docker/overlay2/437c8fface47263a5556077120b346b810bd07153f3033e0099cd9d246f528f9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/437c8fface47263a5556077120b346b810bd07153f3033e0099cd9d246f528f9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/437c8fface47263a5556077120b346b810bd07153f3033e0099cd9d246f528f9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-814968",
	                "Source": "/var/lib/docker/volumes/addons-814968/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-814968",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-814968",
	                "name.minikube.sigs.k8s.io": "addons-814968",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b7ff64ae22a4804532e10bc0b1f204bad0baf0d0d2da3318217819eef34e7326",
	            "SandboxKey": "/var/run/docker/netns/b7ff64ae22a4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-814968": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a9d38e26e32c6bce52cce30e4e79870e59f7e727468425e0a248b942225086a9",
	                    "EndpointID": "86fdf39292e2f1b44a7ea27da7b8e11a77e1e4da020b31b28866ba4d4feae27c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-814968",
	                        "1cffd86fbfa3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
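When only one field from this dump matters, docker inspect can apply a Go template instead of printing the whole document; for example, just the published port map from the NetworkSettings section above:

    # Prints the host-port bindings (22, 2376, 5000, 8443, 32443) as JSON.
    docker inspect addons-814968 --format '{{json .NetworkSettings.Ports}}'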
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-814968 -n addons-814968
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-814968 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-814968 logs -n 25: (1.199093703s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-543737                                                                     | download-only-543737   | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:46 UTC |
	| delete  | -p download-only-509071                                                                     | download-only-509071   | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:46 UTC |
	| start   | --download-only -p                                                                          | download-docker-242838 | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | download-docker-242838                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-242838                                                                   | download-docker-242838 | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:46 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-233255   | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | binary-mirror-233255                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45383                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-233255                                                                     | binary-mirror-233255   | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:46 UTC |
	| addons  | disable dashboard -p                                                                        | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | addons-814968                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | addons-814968                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-814968 --wait=true                                                                | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:49 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-814968 addons disable                                                                | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:49 UTC | 09 Oct 24 18:49 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-814968 addons disable                                                                | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:57 UTC | 09 Oct 24 18:57 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:57 UTC | 09 Oct 24 18:57 UTC |
	|         | -p addons-814968                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-814968 addons disable                                                                | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:57 UTC | 09 Oct 24 18:57 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-814968 ip                                                                            | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:57 UTC | 09 Oct 24 18:57 UTC |
	| addons  | addons-814968 addons disable                                                                | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:57 UTC | 09 Oct 24 18:57 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-814968 addons                                                                        | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:57 UTC | 09 Oct 24 18:57 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-814968 ssh curl -s                                                                   | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:57 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-814968 addons                                                                        | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:57 UTC | 09 Oct 24 18:57 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-814968 addons disable                                                                | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:57 UTC | 09 Oct 24 18:58 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-814968 addons                                                                        | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-814968 addons                                                                        | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-814968 ssh cat                                                                       | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | /opt/local-path-provisioner/pvc-0ee2d6e6-4e3a-44c5-8adf-db1e9e8041de_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-814968 addons disable                                                                | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-814968 addons                                                                        | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-814968 ip                                                                            | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:59 UTC | 09 Oct 24 18:59 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
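	The multi-line "start" entry in the Audit table above is the full configuration of the cluster under test; flattened back into a single runnable command (reconstructed from the Args column, nothing added) it reads:

	    out/minikube-linux-amd64 start -p addons-814968 --wait=true --memory=4000 \
	      --alsologtostderr --addons=registry --addons=metrics-server \
	      --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth \
	      --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin \
	      --addons=yakd --addons=volcano --driver=docker --container-runtime=crio \
	      --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher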
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 18:46:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:46:23.630639   17296 out.go:345] Setting OutFile to fd 1 ...
	I1009 18:46:23.630737   17296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:46:23.630742   17296 out.go:358] Setting ErrFile to fd 2...
	I1009 18:46:23.630747   17296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:46:23.630900   17296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9209/.minikube/bin
	I1009 18:46:23.631510   17296 out.go:352] Setting JSON to false
	I1009 18:46:23.632319   17296 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1735,"bootTime":1728497849,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:46:23.632413   17296 start.go:139] virtualization: kvm guest
	I1009 18:46:23.634356   17296 out.go:177] * [addons-814968] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 18:46:23.635505   17296 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 18:46:23.635545   17296 notify.go:220] Checking for updates...
	I1009 18:46:23.637620   17296 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:46:23.638770   17296 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9209/kubeconfig
	I1009 18:46:23.639791   17296 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9209/.minikube
	I1009 18:46:23.640874   17296 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:46:23.642015   17296 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:46:23.643369   17296 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 18:46:23.666278   17296 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1009 18:46:23.666356   17296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:46:23.709451   17296 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-09 18:46:23.700447089 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:46:23.709552   17296 docker.go:318] overlay module found
	I1009 18:46:23.711274   17296 out.go:177] * Using the docker driver based on user configuration
	I1009 18:46:23.712309   17296 start.go:297] selected driver: docker
	I1009 18:46:23.712327   17296 start.go:901] validating driver "docker" against <nil>
	I1009 18:46:23.712338   17296 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:46:23.713152   17296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:46:23.761848   17296 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-09 18:46:23.753565989 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:46:23.761997   17296 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 18:46:23.762234   17296 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:46:23.763755   17296 out.go:177] * Using Docker driver with root privileges
	I1009 18:46:23.764719   17296 cni.go:84] Creating CNI manager for ""
	I1009 18:46:23.764776   17296 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:46:23.764785   17296 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:46:23.764850   17296 start.go:340] cluster config:
	{Name:addons-814968 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-814968 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:46:23.765955   17296 out.go:177] * Starting "addons-814968" primary control-plane node in "addons-814968" cluster
	I1009 18:46:23.766857   17296 cache.go:121] Beginning downloading kic base image for docker with crio
	I1009 18:46:23.768003   17296 out.go:177] * Pulling base image v0.0.45-1728382586-19774 ...
	I1009 18:46:23.769160   17296 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 18:46:23.769185   17296 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1009 18:46:23.769210   17296 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9209/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:46:23.769235   17296 cache.go:56] Caching tarball of preloaded images
	I1009 18:46:23.769351   17296 preload.go:172] Found /home/jenkins/minikube-integration/19780-9209/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:46:23.769367   17296 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 18:46:23.769788   17296 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/config.json ...
	I1009 18:46:23.769830   17296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/config.json: {Name:mkfbea350396646be2581c2f722a4c2a0580f2d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:46:23.784895   17296 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1009 18:46:23.785021   17296 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
	I1009 18:46:23.785041   17296 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory, skipping pull
	I1009 18:46:23.785049   17296 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in cache, skipping pull
	I1009 18:46:23.785056   17296 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec as a tarball
	I1009 18:46:23.785063   17296 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec from local cache
	I1009 18:46:35.553259   17296 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec from cached tarball
	I1009 18:46:35.553297   17296 cache.go:194] Successfully downloaded all kic artifacts
	I1009 18:46:35.553331   17296 start.go:360] acquireMachinesLock for addons-814968: {Name:mk93a1915d4c29d52bf51bdf1943947d947876d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:46:35.553427   17296 start.go:364] duration metric: took 77.389µs to acquireMachinesLock for "addons-814968"
	I1009 18:46:35.553454   17296 start.go:93] Provisioning new machine with config: &{Name:addons-814968 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-814968 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:46:35.553540   17296 start.go:125] createHost starting for "" (driver="docker")
	I1009 18:46:35.555171   17296 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1009 18:46:35.555380   17296 start.go:159] libmachine.API.Create for "addons-814968" (driver="docker")
	I1009 18:46:35.555416   17296 client.go:168] LocalClient.Create starting
	I1009 18:46:35.555489   17296 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19780-9209/.minikube/certs/ca.pem
	I1009 18:46:35.811322   17296 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19780-9209/.minikube/certs/cert.pem
	I1009 18:46:36.053584   17296 cli_runner.go:164] Run: docker network inspect addons-814968 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:46:36.069217   17296 cli_runner.go:211] docker network inspect addons-814968 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:46:36.069293   17296 network_create.go:284] running [docker network inspect addons-814968] to gather additional debugging logs...
	I1009 18:46:36.069313   17296 cli_runner.go:164] Run: docker network inspect addons-814968
	W1009 18:46:36.084931   17296 cli_runner.go:211] docker network inspect addons-814968 returned with exit code 1
	I1009 18:46:36.084959   17296 network_create.go:287] error running [docker network inspect addons-814968]: docker network inspect addons-814968: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-814968 not found
	I1009 18:46:36.084971   17296 network_create.go:289] output of [docker network inspect addons-814968]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-814968 not found
	
	** /stderr **
	I1009 18:46:36.085053   17296 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:46:36.100627   17296 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c7aaf0}
	I1009 18:46:36.100670   17296 network_create.go:124] attempt to create docker network addons-814968 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 18:46:36.100709   17296 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-814968 addons-814968
	I1009 18:46:36.160976   17296 network_create.go:108] docker network addons-814968 192.168.49.0/24 created
	I1009 18:46:36.161007   17296 kic.go:121] calculated static IP "192.168.49.2" for the "addons-814968" container
	I1009 18:46:36.161059   17296 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:46:36.175893   17296 cli_runner.go:164] Run: docker volume create addons-814968 --label name.minikube.sigs.k8s.io=addons-814968 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:46:36.193284   17296 oci.go:103] Successfully created a docker volume addons-814968
	I1009 18:46:36.193352   17296 cli_runner.go:164] Run: docker run --rm --name addons-814968-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-814968 --entrypoint /usr/bin/test -v addons-814968:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib
	I1009 18:46:43.450541   17296 cli_runner.go:217] Completed: docker run --rm --name addons-814968-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-814968 --entrypoint /usr/bin/test -v addons-814968:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib: (7.257148951s)
	I1009 18:46:43.450572   17296 oci.go:107] Successfully prepared a docker volume addons-814968
	I1009 18:46:43.450609   17296 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 18:46:43.450633   17296 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:46:43.450689   17296 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19780-9209/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-814968:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:46:47.842712   17296 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19780-9209/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-814968:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir: (4.391983265s)
	I1009 18:46:47.842742   17296 kic.go:203] duration metric: took 4.392107004s to extract preloaded images to volume ...
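
The two Completed: entries above show the preload pattern: tar runs inside a throwaway container so the lz4 tarball unpacks straight into the named docker volume. A rough os/exec sketch of the same invocation, with placeholder paths and image in place of the exact values minikube passes:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Unpack a preloaded tarball into the volume "addons-814968" by running
	// tar inside a disposable container (paths/image are placeholders).
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro",
		"-v", "addons-814968:/extractDir",
		"some/base-image:tag",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out), err)
}
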
	W1009 18:46:47.842858   17296 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 18:46:47.842946   17296 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:46:47.888993   17296 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-814968 --name addons-814968 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-814968 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-814968 --network addons-814968 --ip 192.168.49.2 --volume addons-814968:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec
	I1009 18:46:48.197868   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Running}}
	I1009 18:46:48.214902   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:46:48.235537   17296 cli_runner.go:164] Run: docker exec addons-814968 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:46:48.278675   17296 oci.go:144] the created container "addons-814968" has a running status.
	I1009 18:46:48.278701   17296 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa...
	I1009 18:46:48.435493   17296 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:46:48.456834   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:46:48.476231   17296 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:46:48.476269   17296 kic_runner.go:114] Args: [docker exec --privileged addons-814968 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:46:48.533390   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:46:48.559834   17296 machine.go:93] provisionDockerMachine start ...
	I1009 18:46:48.559926   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:46:48.580989   17296 main.go:141] libmachine: Using SSH client type: native
	I1009 18:46:48.581181   17296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1009 18:46:48.581192   17296 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:46:48.826574   17296 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-814968
	
	I1009 18:46:48.826606   17296 ubuntu.go:169] provisioning hostname "addons-814968"
	I1009 18:46:48.826681   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:46:48.845342   17296 main.go:141] libmachine: Using SSH client type: native
	I1009 18:46:48.845506   17296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1009 18:46:48.845521   17296 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-814968 && echo "addons-814968" | sudo tee /etc/hostname
	I1009 18:46:48.997938   17296 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-814968
	
	I1009 18:46:48.998016   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:46:49.016092   17296 main.go:141] libmachine: Using SSH client type: native
	I1009 18:46:49.016264   17296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1009 18:46:49.016280   17296 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-814968' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-814968/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-814968' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:46:49.151727   17296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
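
The hostname and /etc/hosts commands above run over the container's published SSH port (127.0.0.1:32768 in this run). A minimal sketch of that step with golang.org/x/crypto/ssh, assuming an illustrative key path; host-key checking is skipped only because the endpoint is a local container:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("id_rsa") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local container only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
	out, _ := session.Output("hostname") // one command per session
	fmt.Print(string(out))
}
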
	I1009 18:46:49.151754   17296 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9209/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9209/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9209/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9209/.minikube}
	I1009 18:46:49.151775   17296 ubuntu.go:177] setting up certificates
	I1009 18:46:49.151788   17296 provision.go:84] configureAuth start
	I1009 18:46:49.151844   17296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-814968
	I1009 18:46:49.170541   17296 provision.go:143] copyHostCerts
	I1009 18:46:49.170625   17296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9209/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9209/.minikube/ca.pem (1078 bytes)
	I1009 18:46:49.170734   17296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9209/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9209/.minikube/cert.pem (1123 bytes)
	I1009 18:46:49.170791   17296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9209/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9209/.minikube/key.pem (1675 bytes)
	I1009 18:46:49.170839   17296 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9209/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9209/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9209/.minikube/certs/ca-key.pem org=jenkins.addons-814968 san=[127.0.0.1 192.168.49.2 addons-814968 localhost minikube]
	I1009 18:46:49.293598   17296 provision.go:177] copyRemoteCerts
	I1009 18:46:49.293661   17296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:46:49.293697   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:46:49.311068   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:46:49.412073   17296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9209/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:46:49.434720   17296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9209/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 18:46:49.457651   17296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9209/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 18:46:49.479853   17296 provision.go:87] duration metric: took 328.05344ms to configureAuth
	I1009 18:46:49.479879   17296 ubuntu.go:193] setting minikube options for container-runtime
	I1009 18:46:49.480030   17296 config.go:182] Loaded profile config "addons-814968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 18:46:49.480118   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:46:49.496977   17296 main.go:141] libmachine: Using SSH client type: native
	I1009 18:46:49.497143   17296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1009 18:46:49.497159   17296 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:46:49.724509   17296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:46:49.724535   17296 machine.go:96] duration metric: took 1.164678329s to provisionDockerMachine
	I1009 18:46:49.724548   17296 client.go:171] duration metric: took 14.169123577s to LocalClient.Create
	I1009 18:46:49.724568   17296 start.go:167] duration metric: took 14.169186307s to libmachine.API.Create "addons-814968"
	I1009 18:46:49.724581   17296 start.go:293] postStartSetup for "addons-814968" (driver="docker")
	I1009 18:46:49.724596   17296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:46:49.724673   17296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:46:49.724720   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:46:49.741883   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:46:49.839970   17296 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:46:49.843041   17296 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:46:49.843072   17296 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1009 18:46:49.843080   17296 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1009 18:46:49.843086   17296 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1009 18:46:49.843098   17296 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9209/.minikube/addons for local assets ...
	I1009 18:46:49.843151   17296 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9209/.minikube/files for local assets ...
	I1009 18:46:49.843176   17296 start.go:296] duration metric: took 118.585501ms for postStartSetup
	I1009 18:46:49.843472   17296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-814968
	I1009 18:46:49.860805   17296 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/config.json ...
	I1009 18:46:49.861060   17296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:46:49.861100   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:46:49.877297   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:46:49.967778   17296 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:46:49.971777   17296 start.go:128] duration metric: took 14.41822137s to createHost
	I1009 18:46:49.971801   17296 start.go:83] releasing machines lock for "addons-814968", held for 14.418362266s
	I1009 18:46:49.971869   17296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-814968
	I1009 18:46:49.988762   17296 ssh_runner.go:195] Run: cat /version.json
	I1009 18:46:49.988789   17296 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:46:49.988811   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:46:49.988841   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:46:50.006480   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:46:50.007223   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:46:50.177877   17296 ssh_runner.go:195] Run: systemctl --version
	I1009 18:46:50.181945   17296 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:46:50.320325   17296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 18:46:50.324388   17296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:46:50.342667   17296 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1009 18:46:50.342737   17296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:46:50.368956   17296 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1009 18:46:50.368979   17296 start.go:495] detecting cgroup driver to use...
	I1009 18:46:50.369009   17296 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 18:46:50.369044   17296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:46:50.382328   17296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:46:50.392195   17296 docker.go:217] disabling cri-docker service (if available) ...
	I1009 18:46:50.392241   17296 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:46:50.404432   17296 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:46:50.417348   17296 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:46:50.492127   17296 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:46:50.567434   17296 docker.go:233] disabling docker service ...
	I1009 18:46:50.567489   17296 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:46:50.584044   17296 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:46:50.594617   17296 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:46:50.672151   17296 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:46:50.759493   17296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1009 18:46:50.770179   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:46:50.784254   17296 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 18:46:50.784312   17296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:46:50.793299   17296 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 18:46:50.793361   17296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:46:50.801925   17296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:46:50.810799   17296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:46:50.819300   17296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:46:50.827359   17296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:46:50.835535   17296 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:46:50.848964   17296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
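
The run of sed invocations above edits /etc/crio/crio.conf.d/02-crio.conf line by line. A sketch of the same kind of line-anchored rewrite in Go, over an inlined sample rather than the real file:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Sample drop-in; the real file on the node has more settings.
	conf := `[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"`
	// Mirror of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	fmt.Println(re.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`))
}
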
	I1009 18:46:50.857549   17296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:46:50.864612   17296 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 18:46:50.864657   17296 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 18:46:50.877022   17296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1009 18:46:50.884627   17296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:46:50.954295   17296 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 18:46:51.069922   17296 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:46:51.069992   17296 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:46:51.073466   17296 start.go:563] Will wait 60s for crictl version
	I1009 18:46:51.073515   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:46:51.076745   17296 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 18:46:51.108427   17296 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1009 18:46:51.108542   17296 ssh_runner.go:195] Run: crio --version
	I1009 18:46:51.142790   17296 ssh_runner.go:195] Run: crio --version
	I1009 18:46:51.178659   17296 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1009 18:46:51.179851   17296 cli_runner.go:164] Run: docker network inspect addons-814968 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:46:51.196458   17296 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:46:51.199800   17296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:46:51.210543   17296 kubeadm.go:883] updating cluster {Name:addons-814968 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-814968 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:46:51.210687   17296 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 18:46:51.210769   17296 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:46:51.272988   17296 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:46:51.273009   17296 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:46:51.273048   17296 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:46:51.303644   17296 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:46:51.303665   17296 cache_images.go:84] Images are preloaded, skipping loading
	I1009 18:46:51.303677   17296 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I1009 18:46:51.303765   17296 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-814968 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-814968 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 18:46:51.303821   17296 ssh_runner.go:195] Run: crio config
	I1009 18:46:51.343005   17296 cni.go:84] Creating CNI manager for ""
	I1009 18:46:51.343026   17296 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:46:51.343041   17296 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 18:46:51.343063   17296 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-814968 NodeName:addons-814968 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:46:51.343188   17296 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-814968"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
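The generated kubeadm config above pins podSubnet to 10.244.0.0/16 and serviceSubnet to 10.96.0.0/12, which must not overlap. A quick stdlib check of that invariant (a hand-rolled sanity test, not minikube code):

package main

import (
	"fmt"
	"net"
)

// CIDR blocks are aligned, so two ranges overlap exactly when one
// contains the other's base address.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	_, pods, _ := net.ParseCIDR("10.244.0.0/16")
	_, svcs, _ := net.ParseCIDR("10.96.0.0/12")
	fmt.Println("overlap:", overlaps(pods, svcs)) // overlap: false
}
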
	I1009 18:46:51.343269   17296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 18:46:51.351467   17296 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 18:46:51.351542   17296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:46:51.359309   17296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1009 18:46:51.374895   17296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:46:51.390849   17296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I1009 18:46:51.406596   17296 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 18:46:51.409776   17296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:46:51.419588   17296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:46:51.491686   17296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:46:51.503923   17296 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968 for IP: 192.168.49.2
	I1009 18:46:51.503947   17296 certs.go:194] generating shared ca certs ...
	I1009 18:46:51.503968   17296 certs.go:226] acquiring lock for ca certs: {Name:mkb239be22b48fcec8220567bb09be367227c7bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:46:51.504090   17296 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9209/.minikube/ca.key
	I1009 18:46:51.586588   17296 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9209/.minikube/ca.crt ...
	I1009 18:46:51.586615   17296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9209/.minikube/ca.crt: {Name:mk9017172016aab041c9d0974cc54ec89ffe8046 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:46:51.586796   17296 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9209/.minikube/ca.key ...
	I1009 18:46:51.586820   17296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9209/.minikube/ca.key: {Name:mkcc0e54630796737c7e4ca6bb840db75ecb2612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:46:51.586927   17296 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9209/.minikube/proxy-client-ca.key
	I1009 18:46:51.760127   17296 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9209/.minikube/proxy-client-ca.crt ...
	I1009 18:46:51.760157   17296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9209/.minikube/proxy-client-ca.crt: {Name:mkdc51782eb792306c095a5b9e06ed936f4f9db0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:46:51.760330   17296 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9209/.minikube/proxy-client-ca.key ...
	I1009 18:46:51.760341   17296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9209/.minikube/proxy-client-ca.key: {Name:mk1c16156690aff81e7166e5eeab1762de0e570a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
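
certs.go above self-signs the shared minikubeCA and proxyClientCA pairs. A condensed illustration of self-signing a CA with crypto/x509; the field values here are invented, and minikube's real code sets more of them:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Self-signed CA: the template serves as both subject and issuer.
	tpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tpl, tpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
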
	I1009 18:46:51.760408   17296 certs.go:256] generating profile certs ...
	I1009 18:46:51.760461   17296 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.key
	I1009 18:46:51.760483   17296 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt with IP's: []
	I1009 18:46:51.892598   17296 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt ...
	I1009 18:46:51.892628   17296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt: {Name:mkb6c9da8d44cf533327e70f97d5cdfad57104a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:46:51.892795   17296 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.key ...
	I1009 18:46:51.892806   17296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.key: {Name:mk2cabfe4365d7f47d7f418a481b3f7a5010b79f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:46:51.892873   17296 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/apiserver.key.c00f15e1
	I1009 18:46:51.892890   17296 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/apiserver.crt.c00f15e1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1009 18:46:52.042557   17296 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/apiserver.crt.c00f15e1 ...
	I1009 18:46:52.042591   17296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/apiserver.crt.c00f15e1: {Name:mkee029f9a4ac259898be0f264b9384438234bde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:46:52.042759   17296 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/apiserver.key.c00f15e1 ...
	I1009 18:46:52.042773   17296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/apiserver.key.c00f15e1: {Name:mk6cdc8fdee962c0eb559ed3b23b985af4d63b00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:46:52.042853   17296 certs.go:381] copying /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/apiserver.crt.c00f15e1 -> /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/apiserver.crt
	I1009 18:46:52.042922   17296 certs.go:385] copying /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/apiserver.key.c00f15e1 -> /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/apiserver.key
	I1009 18:46:52.042967   17296 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/proxy-client.key
	I1009 18:46:52.042983   17296 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/proxy-client.crt with IP's: []
	I1009 18:46:52.189346   17296 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/proxy-client.crt ...
	I1009 18:46:52.189383   17296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/proxy-client.crt: {Name:mk1982ae3e0f7bd30a28be5ea07e23a663ec466f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:46:52.189549   17296 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/proxy-client.key ...
	I1009 18:46:52.189565   17296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/proxy-client.key: {Name:mk9d212f973bc5ced33898bf3a0e82c2483498f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:46:52.189811   17296 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9209/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 18:46:52.189859   17296 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9209/.minikube/certs/ca.pem (1078 bytes)
	I1009 18:46:52.189898   17296 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9209/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:46:52.189932   17296 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9209/.minikube/certs/key.pem (1675 bytes)
	I1009 18:46:52.190530   17296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9209/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:46:52.214550   17296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9209/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 18:46:52.236308   17296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9209/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:46:52.258110   17296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9209/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 18:46:52.279209   17296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1009 18:46:52.300274   17296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:46:52.321280   17296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:46:52.342050   17296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:46:52.362928   17296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9209/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:46:52.383777   17296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:46:52.399238   17296 ssh_runner.go:195] Run: openssl version
	I1009 18:46:52.404160   17296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:46:52.412993   17296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:46:52.416271   17296 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:46 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:46:52.416327   17296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:46:52.422761   17296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1009 18:46:52.431548   17296 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:46:52.434469   17296 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 18:46:52.434514   17296 kubeadm.go:392] StartCluster: {Name:addons-814968 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-814968 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:46:52.434588   17296 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:46:52.434629   17296 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:46:52.467544   17296 cri.go:89] found id: ""
	I1009 18:46:52.467598   17296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:46:52.475527   17296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:46:52.483237   17296 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:46:52.483299   17296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:46:52.490906   17296 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:46:52.490926   17296 kubeadm.go:157] found existing configuration files:
	
	I1009 18:46:52.490968   17296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:46:52.498550   17296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:46:52.498605   17296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:46:52.506064   17296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:46:52.514186   17296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:46:52.514241   17296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:46:52.521731   17296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:46:52.529439   17296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:46:52.529495   17296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:46:52.537186   17296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:46:52.544985   17296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:46:52.545048   17296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1009 18:46:52.553329   17296 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:46:52.586008   17296 kubeadm.go:310] W1009 18:46:52.585245    1292 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 18:46:52.586320   17296 kubeadm.go:310] W1009 18:46:52.585837    1292 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 18:46:52.603599   17296 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I1009 18:46:52.650752   17296 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:47:01.045615   17296 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1009 18:47:01.045706   17296 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 18:47:01.045826   17296 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:47:01.045897   17296 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I1009 18:47:01.045941   17296 kubeadm.go:310] OS: Linux
	I1009 18:47:01.046019   17296 kubeadm.go:310] CGROUPS_CPU: enabled
	I1009 18:47:01.046094   17296 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1009 18:47:01.046163   17296 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1009 18:47:01.046242   17296 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1009 18:47:01.046314   17296 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1009 18:47:01.046384   17296 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1009 18:47:01.046442   17296 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1009 18:47:01.046506   17296 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1009 18:47:01.046598   17296 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1009 18:47:01.046698   17296 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:47:01.046841   17296 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:47:01.046951   17296 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:47:01.047047   17296 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:47:01.048804   17296 out.go:235]   - Generating certificates and keys ...
	I1009 18:47:01.048908   17296 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 18:47:01.048995   17296 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 18:47:01.049086   17296 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 18:47:01.049155   17296 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1009 18:47:01.049211   17296 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1009 18:47:01.049262   17296 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1009 18:47:01.049317   17296 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1009 18:47:01.049461   17296 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-814968 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:47:01.049545   17296 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1009 18:47:01.049681   17296 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-814968 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:47:01.049764   17296 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 18:47:01.049850   17296 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 18:47:01.049906   17296 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1009 18:47:01.049980   17296 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:47:01.050054   17296 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:47:01.050133   17296 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:47:01.050213   17296 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:47:01.050303   17296 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:47:01.050381   17296 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:47:01.050483   17296 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:47:01.050586   17296 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:47:01.052273   17296 out.go:235]   - Booting up control plane ...
	I1009 18:47:01.052358   17296 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:47:01.052426   17296 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:47:01.052498   17296 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:47:01.052614   17296 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:47:01.052698   17296 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:47:01.052739   17296 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 18:47:01.052845   17296 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:47:01.052950   17296 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:47:01.053019   17296 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.994243ms
	I1009 18:47:01.053116   17296 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1009 18:47:01.053167   17296 kubeadm.go:310] [api-check] The API server is healthy after 4.001905081s
	I1009 18:47:01.053259   17296 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 18:47:01.053371   17296 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 18:47:01.053421   17296 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 18:47:01.053581   17296 kubeadm.go:310] [mark-control-plane] Marking the node addons-814968 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 18:47:01.053631   17296 kubeadm.go:310] [bootstrap-token] Using token: a7saxq.a7xvj50z3lneobes
	I1009 18:47:01.055101   17296 out.go:235]   - Configuring RBAC rules ...
	I1009 18:47:01.055226   17296 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 18:47:01.055309   17296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 18:47:01.055428   17296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 18:47:01.055554   17296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 18:47:01.055654   17296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 18:47:01.055725   17296 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 18:47:01.055818   17296 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 18:47:01.055856   17296 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 18:47:01.055895   17296 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 18:47:01.055904   17296 kubeadm.go:310] 
	I1009 18:47:01.055957   17296 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 18:47:01.055963   17296 kubeadm.go:310] 
	I1009 18:47:01.056028   17296 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 18:47:01.056036   17296 kubeadm.go:310] 
	I1009 18:47:01.056065   17296 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 18:47:01.056114   17296 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 18:47:01.056161   17296 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 18:47:01.056167   17296 kubeadm.go:310] 
	I1009 18:47:01.056215   17296 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 18:47:01.056222   17296 kubeadm.go:310] 
	I1009 18:47:01.056265   17296 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 18:47:01.056272   17296 kubeadm.go:310] 
	I1009 18:47:01.056329   17296 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 18:47:01.056419   17296 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 18:47:01.056513   17296 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 18:47:01.056526   17296 kubeadm.go:310] 
	I1009 18:47:01.056626   17296 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 18:47:01.056794   17296 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 18:47:01.056815   17296 kubeadm.go:310] 
	I1009 18:47:01.056896   17296 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a7saxq.a7xvj50z3lneobes \
	I1009 18:47:01.056985   17296 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0f019d0380fedf73af6bbd9730211a8845b5739fb8c36385f8ca038fee98ec96 \
	I1009 18:47:01.057004   17296 kubeadm.go:310] 	--control-plane 
	I1009 18:47:01.057010   17296 kubeadm.go:310] 
	I1009 18:47:01.057087   17296 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 18:47:01.057094   17296 kubeadm.go:310] 
	I1009 18:47:01.057161   17296 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a7saxq.a7xvj50z3lneobes \
	I1009 18:47:01.057256   17296 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0f019d0380fedf73af6bbd9730211a8845b5739fb8c36385f8ca038fee98ec96 
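The --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key. If the join command is lost, the hash can be recomputed on the control-plane node with the standard recipe from the kubeadm documentation (assuming the default PKI path), or the whole line regenerated with kubeadm token create --print-join-command:

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'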
	I1009 18:47:01.057288   17296 cni.go:84] Creating CNI manager for ""
	I1009 18:47:01.057294   17296 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:47:01.058783   17296 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1009 18:47:01.060104   17296 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 18:47:01.063803   17296 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1009 18:47:01.063819   17296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 18:47:01.080841   17296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1009 18:47:01.270028   17296 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 18:47:01.270088   17296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:01.270114   17296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-814968 minikube.k8s.io/updated_at=2024_10_09T18_47_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=addons-814968 minikube.k8s.io/primary=true
	I1009 18:47:01.351931   17296 ops.go:34] apiserver oom_adj: -16
	I1009 18:47:01.352051   17296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:01.853135   17296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:02.352356   17296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:02.852505   17296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:03.352263   17296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:03.853151   17296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:04.352450   17296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:04.852424   17296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:05.352318   17296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:05.852544   17296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:05.913453   17296 kubeadm.go:1113] duration metric: took 4.643431535s to wait for elevateKubeSystemPrivileges
	I1009 18:47:05.913490   17296 kubeadm.go:394] duration metric: took 13.478978532s to StartCluster
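The repeated "get sa default" calls above poll for the default ServiceAccount, which the controller manager creates asynchronously after startup; minikube waits for it before moving on to addon setup. A rough shell equivalent of that wait, assuming kubectl already points at this cluster:

	# wait until the controller manager has created the default ServiceAccount
	until kubectl -n default get sa default >/dev/null 2>&1; do sleep 0.5; done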
	I1009 18:47:05.913519   17296 settings.go:142] acquiring lock: {Name:mk1ea3be815dc8fdbed3ad1d456d5a6e32d5dcd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:05.913619   17296 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9209/kubeconfig
	I1009 18:47:05.913952   17296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9209/kubeconfig: {Name:mk025fb048f06803d5f7ce2799ddfa736e063e97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:05.914122   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 18:47:05.914130   17296 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:47:05.914214   17296 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1009 18:47:05.914333   17296 addons.go:69] Setting yakd=true in profile "addons-814968"
	I1009 18:47:05.914343   17296 config.go:182] Loaded profile config "addons-814968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 18:47:05.914351   17296 addons.go:234] Setting addon yakd=true in "addons-814968"
	I1009 18:47:05.914347   17296 addons.go:69] Setting default-storageclass=true in profile "addons-814968"
	I1009 18:47:05.914376   17296 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-814968"
	I1009 18:47:05.914384   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.914385   17296 addons.go:69] Setting cloud-spanner=true in profile "addons-814968"
	I1009 18:47:05.914396   17296 addons.go:234] Setting addon cloud-spanner=true in "addons-814968"
	I1009 18:47:05.914401   17296 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-814968"
	I1009 18:47:05.914421   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.914410   17296 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-814968"
	I1009 18:47:05.914440   17296 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-814968"
	I1009 18:47:05.914445   17296 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-814968"
	I1009 18:47:05.914475   17296 addons.go:69] Setting ingress-dns=true in profile "addons-814968"
	I1009 18:47:05.914489   17296 addons.go:69] Setting inspektor-gadget=true in profile "addons-814968"
	I1009 18:47:05.914498   17296 addons.go:234] Setting addon ingress-dns=true in "addons-814968"
	I1009 18:47:05.914501   17296 addons.go:234] Setting addon inspektor-gadget=true in "addons-814968"
	I1009 18:47:05.914502   17296 addons.go:69] Setting volcano=true in profile "addons-814968"
	I1009 18:47:05.914516   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.914524   17296 addons.go:234] Setting addon volcano=true in "addons-814968"
	I1009 18:47:05.914526   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.914551   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.914747   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.914758   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.914911   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.914911   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.914913   17296 addons.go:69] Setting volumesnapshots=true in profile "addons-814968"
	I1009 18:47:05.914929   17296 addons.go:234] Setting addon volumesnapshots=true in "addons-814968"
	I1009 18:47:05.914955   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.914959   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.914975   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.915019   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.915351   17296 addons.go:69] Setting registry=true in profile "addons-814968"
	I1009 18:47:05.915376   17296 addons.go:234] Setting addon registry=true in "addons-814968"
	I1009 18:47:05.915377   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.915404   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.915597   17296 addons.go:69] Setting gcp-auth=true in profile "addons-814968"
	I1009 18:47:05.915646   17296 mustload.go:65] Loading cluster: addons-814968
	I1009 18:47:05.915671   17296 addons.go:69] Setting ingress=true in profile "addons-814968"
	I1009 18:47:05.915696   17296 addons.go:234] Setting addon ingress=true in "addons-814968"
	I1009 18:47:05.915750   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.915883   17296 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-814968"
	I1009 18:47:05.915915   17296 config.go:182] Loaded profile config "addons-814968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 18:47:05.915934   17296 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-814968"
	I1009 18:47:05.915960   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.916186   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.916228   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.916308   17296 addons.go:69] Setting storage-provisioner=true in profile "addons-814968"
	I1009 18:47:05.916334   17296 addons.go:234] Setting addon storage-provisioner=true in "addons-814968"
	I1009 18:47:05.916360   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.916390   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.916653   17296 out.go:177] * Verifying Kubernetes components...
	I1009 18:47:05.916714   17296 addons.go:69] Setting metrics-server=true in profile "addons-814968"
	I1009 18:47:05.916730   17296 addons.go:234] Setting addon metrics-server=true in "addons-814968"
	I1009 18:47:05.916761   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.914479   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.918399   17296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:47:05.947973   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.947988   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.948546   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.949133   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.978180   17296 addons.go:234] Setting addon default-storageclass=true in "addons-814968"
	I1009 18:47:05.978236   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.978757   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.982236   17296 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1009 18:47:05.982876   17296 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-814968"
	I1009 18:47:05.982923   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.983826   17296 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1009 18:47:05.984321   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.986094   17296 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1009 18:47:05.986115   17296 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1009 18:47:05.986188   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:05.986581   17296 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1009 18:47:05.986597   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1009 18:47:05.986637   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
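The inspect template used above digs the host port mapped to the container's SSH port (22/tcp) out of .NetworkSettings.Ports; docker port answers the same question more directly. Both forms below assume the container name from the log:

	docker port addons-814968 22
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-814968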
	I1009 18:47:05.994912   17296 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	W1009 18:47:05.995373   17296 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1009 18:47:05.996589   17296 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1009 18:47:05.996613   17296 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1009 18:47:05.996677   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:05.998086   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:06.001545   17296 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1009 18:47:06.003173   17296 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 18:47:06.003225   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1009 18:47:06.003284   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:06.016009   17296 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1009 18:47:06.016038   17296 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1009 18:47:06.016168   17296 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1009 18:47:06.018668   17296 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1009 18:47:06.018714   17296 out.go:177]   - Using image docker.io/registry:2.8.3
	I1009 18:47:06.018689   17296 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1009 18:47:06.018906   17296 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1009 18:47:06.018994   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:06.020325   17296 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1009 18:47:06.020350   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1009 18:47:06.020421   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:06.021695   17296 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1009 18:47:06.023344   17296 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 18:47:06.023366   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1009 18:47:06.023422   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:06.031303   17296 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1009 18:47:06.033309   17296 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 18:47:06.033310   17296 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1009 18:47:06.034244   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:06.034798   17296 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:47:06.034816   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 18:47:06.034870   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:06.037963   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:06.040217   17296 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1009 18:47:06.041863   17296 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1009 18:47:06.043823   17296 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1009 18:47:06.043927   17296 out.go:177]   - Using image docker.io/busybox:stable
	I1009 18:47:06.045276   17296 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1009 18:47:06.045496   17296 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 18:47:06.045517   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1009 18:47:06.045584   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:06.045839   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:06.047095   17296 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 18:47:06.047118   17296 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 18:47:06.047164   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:06.047669   17296 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1009 18:47:06.049262   17296 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1009 18:47:06.051269   17296 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1009 18:47:06.051468   17296 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1009 18:47:06.053211   17296 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1009 18:47:06.053233   17296 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1009 18:47:06.053307   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:06.065549   17296 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 18:47:06.067758   17296 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 18:47:06.067326   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:06.068248   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:06.070588   17296 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1009 18:47:06.076532   17296 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 18:47:06.076568   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1009 18:47:06.076637   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:06.087177   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:06.092780   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:06.094972   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:06.095271   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:06.098312   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:06.098493   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:06.103259   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:06.104846   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:06.106504   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	W1009 18:47:06.130372   17296 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1009 18:47:06.130410   17296 retry.go:31] will retry after 363.568695ms: ssh: handshake failed: EOF
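The EOF handshake failure above just means sshd inside the freshly started container was not accepting connections yet, so the dial is retried after a short randomized delay. A minimal sketch of that pattern using the connection details from this log (port 32768, user docker, the minikube-generated key); minikube's actual retry.go logic is more elaborate:

	for attempt in 1 2 3; do
	  ssh -o StrictHostKeyChecking=no -p 32768 \
	      -i /home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa \
	      docker@127.0.0.1 true && break
	  sleep "$attempt"   # crude linear backoff; minikube randomizes the delay
	done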
	I1009 18:47:06.248477   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 18:47:06.248636   17296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:47:06.425647   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 18:47:06.430920   17296 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1009 18:47:06.430949   17296 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1009 18:47:06.440918   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1009 18:47:06.530026   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:47:06.530507   17296 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1009 18:47:06.530576   17296 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1009 18:47:06.536325   17296 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 18:47:06.536411   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1009 18:47:06.542301   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 18:47:06.544180   17296 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1009 18:47:06.544252   17296 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1009 18:47:06.634039   17296 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1009 18:47:06.634063   17296 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1009 18:47:06.634162   17296 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1009 18:47:06.634169   17296 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1009 18:47:06.638142   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:47:06.725440   17296 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1009 18:47:06.725488   17296 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1009 18:47:06.741571   17296 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 18:47:06.741601   17296 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 18:47:06.743093   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 18:47:06.825477   17296 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1009 18:47:06.825525   17296 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1009 18:47:06.829207   17296 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1009 18:47:06.829330   17296 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1009 18:47:06.842990   17296 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1009 18:47:06.843095   17296 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1009 18:47:06.925395   17296 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1009 18:47:06.925483   17296 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1009 18:47:06.938857   17296 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 18:47:06.938946   17296 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 18:47:06.948788   17296 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1009 18:47:06.948815   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1009 18:47:07.024885   17296 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1009 18:47:07.024914   17296 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1009 18:47:07.045213   17296 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1009 18:47:07.045255   17296 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1009 18:47:07.128395   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 18:47:07.146835   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 18:47:07.226324   17296 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1009 18:47:07.226348   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1009 18:47:07.226624   17296 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1009 18:47:07.226637   17296 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1009 18:47:07.231415   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1009 18:47:07.339843   17296 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1009 18:47:07.339925   17296 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1009 18:47:07.425290   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1009 18:47:07.432103   17296 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1009 18:47:07.432195   17296 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1009 18:47:07.532048   17296 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 18:47:07.532138   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1009 18:47:07.631412   17296 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1009 18:47:07.631497   17296 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1009 18:47:07.729557   17296 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.480887366s)
	I1009 18:47:07.730664   17296 node_ready.go:35] waiting up to 6m0s for node "addons-814968" to be "Ready" ...
	I1009 18:47:07.730947   17296 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.482432475s)
	I1009 18:47:07.730997   17296 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
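The sed pipeline that just completed edits the coredns ConfigMap in place: it inserts a log directive before errors and a hosts block before the forward directive. After the replace, the Corefile should read roughly as below (reconstructed from the sed expressions; untouched directives elided); the result can be checked with kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}':

	.:53 {
	    log
	    errors
	    ...
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}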
	I1009 18:47:07.744576   17296 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1009 18:47:07.744653   17296 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1009 18:47:07.828462   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 18:47:07.941174   17296 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1009 18:47:07.941258   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1009 18:47:08.027189   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.601503525s)
	I1009 18:47:08.226391   17296 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1009 18:47:08.226419   17296 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1009 18:47:08.325581   17296 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1009 18:47:08.325611   17296 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1009 18:47:08.342390   17296 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-814968" context rescaled to 1 replicas
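The rescale above is the ordinary scale operation: on a single-node cluster one CoreDNS replica suffices, so the default two-replica deployment is trimmed. The direct kubectl form, assuming the same context:

	kubectl -n kube-system scale deployment coredns --replicas=1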
	I1009 18:47:08.540370   17296 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1009 18:47:08.540465   17296 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1009 18:47:08.543260   17296 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1009 18:47:08.543326   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1009 18:47:08.733096   17296 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1009 18:47:08.733125   17296 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1009 18:47:08.836750   17296 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1009 18:47:08.836779   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1009 18:47:08.942110   17296 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1009 18:47:08.942142   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1009 18:47:09.124663   17296 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1009 18:47:09.124692   17296 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1009 18:47:09.228590   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1009 18:47:09.231463   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1009 18:47:09.328731   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.887710583s)
	I1009 18:47:09.748850   17296 node_ready.go:53] node "addons-814968" has status "Ready":"False"
	I1009 18:47:10.134369   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.60423985s)
	I1009 18:47:10.145121   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.6027235s)
	I1009 18:47:10.145148   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.50697111s)
	I1009 18:47:10.145189   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.402074182s)
	W1009 18:47:10.244988   17296 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
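The default-storageclass failure above is a routine optimistic-concurrency conflict: another writer updated the local-path StorageClass between minikube's read and its write, so the stale update was rejected (the addon callback retries). Done by hand, marking a class non-default is a single documented patch of the is-default-class annotation:

	kubectl patch storageclass local-path -p \
	  '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'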
	I1009 18:47:11.936422   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.807987062s)
	I1009 18:47:11.936803   17296 addons.go:475] Verifying addon ingress=true in "addons-814968"
	I1009 18:47:11.936809   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.511471288s)
	I1009 18:47:11.936742   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.705247522s)
	I1009 18:47:11.936985   17296 addons.go:475] Verifying addon registry=true in "addons-814968"
	I1009 18:47:11.936769   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.789811994s)
	I1009 18:47:11.937025   17296 addons.go:475] Verifying addon metrics-server=true in "addons-814968"
	I1009 18:47:11.938608   17296 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-814968 service yakd-dashboard -n yakd-dashboard
	
	I1009 18:47:11.939531   17296 out.go:177] * Verifying ingress addon...
	I1009 18:47:11.939536   17296 out.go:177] * Verifying registry addon...
	I1009 18:47:11.942036   17296 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1009 18:47:11.942228   17296 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1009 18:47:11.947045   17296 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1009 18:47:11.947068   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:11.947367   17296 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1009 18:47:11.947389   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
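The kapi.go waiters above poll pods matching each label selector until they leave Pending. The same waits can be expressed directly with kubectl, using the selectors and namespaces from the log (the timeout here is arbitrary):

	kubectl -n ingress-nginx wait --for=condition=Ready \
	  pod -l app.kubernetes.io/name=ingress-nginx --timeout=10m
	kubectl -n kube-system wait --for=condition=Ready \
	  pod -l kubernetes.io/minikube-addons=registry --timeout=10m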
	I1009 18:47:12.236367   17296 node_ready.go:53] node "addons-814968" has status "Ready":"False"
	I1009 18:47:12.446623   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:12.447185   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:12.448705   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.620106938s)
	W1009 18:47:12.448745   17296 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1009 18:47:12.448768   17296 retry.go:31] will retry after 174.33179ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1009 18:47:12.448856   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.220230622s)
	I1009 18:47:12.624011   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
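The retried apply above adds --force, but the underlying failure was a CRD-establishment race: the VolumeSnapshotClass CRD and a VolumeSnapshotClass object were submitted in the same apply, and the CRD's API was not yet being served when the object arrived. The manual way to sidestep this is to wait for the CRD to be established before applying objects of that kind:

	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml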
	I1009 18:47:12.945193   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:12.946209   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:13.230751   17296 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1009 18:47:13.230831   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:13.252020   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.020456466s)
	I1009 18:47:13.252057   17296 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-814968"
	I1009 18:47:13.252473   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:13.253926   17296 out.go:177] * Verifying csi-hostpath-driver addon...
	I1009 18:47:13.255942   17296 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1009 18:47:13.263890   17296 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1009 18:47:13.263913   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:13.438845   17296 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1009 18:47:13.445828   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:13.446302   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:13.457727   17296 addons.go:234] Setting addon gcp-auth=true in "addons-814968"
	I1009 18:47:13.457798   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:13.458126   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:13.477774   17296 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1009 18:47:13.477835   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:13.496978   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:13.759722   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:13.945407   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:13.945935   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:14.259539   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:14.444793   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:14.445361   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:14.733302   17296 node_ready.go:53] node "addons-814968" has status "Ready":"False"
	I1009 18:47:14.759286   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:14.946070   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:14.946533   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:15.260324   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:15.449667   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:15.525623   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:15.824818   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:15.945549   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:15.946194   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:16.133728   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.509660538s)
	I1009 18:47:16.133773   17296 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.655964577s)
	I1009 18:47:16.136119   17296 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1009 18:47:16.138068   17296 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1009 18:47:16.139767   17296 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1009 18:47:16.139793   17296 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1009 18:47:16.158836   17296 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1009 18:47:16.158860   17296 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1009 18:47:16.176122   17296 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1009 18:47:16.176146   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1009 18:47:16.192803   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1009 18:47:16.259502   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:16.445021   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:16.445717   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:16.734774   17296 node_ready.go:53] node "addons-814968" has status "Ready":"False"
	I1009 18:47:16.759897   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:16.836262   17296 addons.go:475] Verifying addon gcp-auth=true in "addons-814968"
	I1009 18:47:16.837691   17296 out.go:177] * Verifying gcp-auth addon...
	I1009 18:47:16.839938   17296 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1009 18:47:16.859872   17296 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1009 18:47:16.859896   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:16.945945   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:16.946486   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:17.259298   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:17.343779   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:17.445759   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:17.446039   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:17.759321   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:17.843703   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:17.945506   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:17.945984   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:18.260214   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:18.343502   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:18.444786   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:18.445160   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:18.759105   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:18.843238   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:18.945379   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:18.945961   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:19.233259   17296 node_ready.go:53] node "addons-814968" has status "Ready":"False"
	I1009 18:47:19.259504   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:19.342817   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:19.446796   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:19.447700   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:19.759022   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:19.843137   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:19.945481   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:19.945976   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:20.259635   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:20.342794   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:20.445448   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:20.445780   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:20.759896   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:20.843377   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:20.944634   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:20.945217   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:21.234094   17296 node_ready.go:53] node "addons-814968" has status "Ready":"False"
	I1009 18:47:21.259684   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:21.343167   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:21.445782   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:21.446075   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:21.759138   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:21.843535   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:21.945086   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:21.945511   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:22.259434   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:22.342828   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:22.445436   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:22.445892   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:22.758821   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:22.842930   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:22.945533   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:22.945847   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:23.259452   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:23.342758   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:23.445295   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:23.445656   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:23.733531   17296 node_ready.go:53] node "addons-814968" has status "Ready":"False"
	I1009 18:47:23.759371   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:23.842472   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:23.944967   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:23.945359   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:24.258773   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:24.342971   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:24.445380   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:24.445776   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:24.736551   17296 node_ready.go:49] node "addons-814968" has status "Ready":"True"
	I1009 18:47:24.736575   17296 node_ready.go:38] duration metric: took 17.005838844s for node "addons-814968" to be "Ready" ...
	I1009 18:47:24.736584   17296 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
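
From here the runner checks each system-critical pod individually (pod_ready.go), reporting "Ready":"True" once the pod's PodReady condition flips. A minimal sketch of that condition check, assuming a corev1.Pod already fetched via client-go; purely illustrative:

	package kapi

	import corev1 "k8s.io/api/core/v1"

	// isPodReady reports whether the pod's Ready condition is True;
	// this is what the has status "Ready":"True" lines below reflect.
	func isPodReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
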
	I1009 18:47:24.744602   17296 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dcfpw" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:24.759976   17296 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1009 18:47:24.759999   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:24.852175   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:24.948047   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:24.948280   17296 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1009 18:47:24.948297   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:25.262018   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:25.425522   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:25.526821   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:25.527989   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:25.760317   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:25.843006   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:25.945560   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:25.946021   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:26.260702   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:26.343790   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:26.447269   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:26.447626   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:26.749365   17296 pod_ready.go:93] pod "coredns-7c65d6cfc9-dcfpw" in "kube-system" namespace has status "Ready":"True"
	I1009 18:47:26.749387   17296 pod_ready.go:82] duration metric: took 2.004695919s for pod "coredns-7c65d6cfc9-dcfpw" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:26.749413   17296 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-814968" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:26.753259   17296 pod_ready.go:93] pod "etcd-addons-814968" in "kube-system" namespace has status "Ready":"True"
	I1009 18:47:26.753281   17296 pod_ready.go:82] duration metric: took 3.859154ms for pod "etcd-addons-814968" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:26.753296   17296 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-814968" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:26.757336   17296 pod_ready.go:93] pod "kube-apiserver-addons-814968" in "kube-system" namespace has status "Ready":"True"
	I1009 18:47:26.757355   17296 pod_ready.go:82] duration metric: took 4.05242ms for pod "kube-apiserver-addons-814968" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:26.757364   17296 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-814968" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:26.760308   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:26.761109   17296 pod_ready.go:93] pod "kube-controller-manager-addons-814968" in "kube-system" namespace has status "Ready":"True"
	I1009 18:47:26.761125   17296 pod_ready.go:82] duration metric: took 3.755076ms for pod "kube-controller-manager-addons-814968" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:26.761135   17296 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wprfw" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:26.764780   17296 pod_ready.go:93] pod "kube-proxy-wprfw" in "kube-system" namespace has status "Ready":"True"
	I1009 18:47:26.764798   17296 pod_ready.go:82] duration metric: took 3.657575ms for pod "kube-proxy-wprfw" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:26.764806   17296 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-814968" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:26.860586   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:26.945696   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:26.946004   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:27.148107   17296 pod_ready.go:93] pod "kube-scheduler-addons-814968" in "kube-system" namespace has status "Ready":"True"
	I1009 18:47:27.148131   17296 pod_ready.go:82] duration metric: took 383.319465ms for pod "kube-scheduler-addons-814968" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:27.148141   17296 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:27.261353   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:27.349074   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:27.446947   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:27.448175   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:27.837088   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:27.843257   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:27.948109   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:27.949237   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:28.260442   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:28.343002   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:28.446287   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:28.447557   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:28.760728   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:28.843589   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:28.945278   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:28.945780   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:29.154004   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:29.261640   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:29.343520   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:29.446488   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:29.447659   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:29.761146   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:29.843443   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:29.945471   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:29.945727   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:30.260630   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:30.343326   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:30.446651   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:30.447176   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:30.760370   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:30.843163   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:30.946390   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:30.946801   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:31.154792   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:31.260735   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:31.362287   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:31.461879   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:31.462166   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:31.760778   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:31.843460   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:31.945691   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:31.945865   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:32.260035   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:32.342713   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:32.446417   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:32.447321   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:32.761290   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:32.843300   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:32.946078   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:32.946339   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:33.261021   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:33.361077   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:33.445910   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:33.446441   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:33.653549   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:33.760532   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:33.843521   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:33.945537   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:33.945776   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:34.260597   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:34.343297   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:34.446438   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:34.446917   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:34.759735   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:34.843826   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:34.946501   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:34.947147   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:35.260835   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:35.342812   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:35.445887   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:35.446256   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:35.653831   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:35.761213   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:35.843780   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:35.946095   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:35.946746   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:36.260351   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:36.343302   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:36.446352   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:36.446482   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:36.760085   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:36.842997   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:36.946319   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:36.946457   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:37.259858   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:37.344101   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:37.446240   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:37.447073   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:37.654375   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:37.760063   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:37.843545   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:37.945936   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:37.946050   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:38.260595   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:38.343585   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:38.445878   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:38.446335   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:38.760088   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:38.843149   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:38.946467   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:38.946825   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:39.260408   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:39.343449   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:39.445719   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:39.445832   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:39.760735   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:39.861038   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:39.945993   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:39.946914   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:40.153821   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:40.260610   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:40.343670   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:40.445625   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:40.446505   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:40.761228   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:40.843782   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:40.946270   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:40.946782   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:41.264880   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:41.365653   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:41.445704   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:41.445948   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:41.760882   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:41.860846   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:41.946144   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:41.946375   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:42.154251   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:42.260526   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:42.343602   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:42.445414   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:42.445829   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:42.760248   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:42.843960   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:42.945874   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:42.946212   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:43.261236   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:43.343760   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:43.445666   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:43.446004   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:43.761367   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:43.843925   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:43.946157   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:43.946569   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:44.157719   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:44.260870   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:44.343648   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:44.446315   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:44.447509   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:44.760909   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:44.843599   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:44.945648   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:44.946044   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:45.260595   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:45.343824   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:45.445957   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:45.446192   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:45.760554   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:45.844255   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:45.946291   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:45.946662   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:46.260281   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:46.343037   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:46.447042   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:46.447614   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:46.653785   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:46.762104   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:46.862194   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:46.945957   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:46.946362   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:47.260329   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:47.343479   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:47.445447   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:47.445742   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:47.814338   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:47.944521   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:47.945492   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:47.945889   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:48.259922   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:48.342622   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:48.445652   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:48.446235   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:48.825496   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:48.843015   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:48.946092   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:48.946745   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:49.154437   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:49.259953   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:49.343304   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:49.447965   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:49.448020   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:49.831622   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:49.844681   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:49.947627   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:49.948357   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:50.260040   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:50.342910   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:50.446330   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:50.446930   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:50.760696   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:50.843459   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:50.945891   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:50.946292   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:51.155349   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:51.259721   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:51.343645   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:51.446561   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:51.447628   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:51.761132   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:51.843134   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:51.947089   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:51.947642   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:52.260439   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:52.344004   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:52.445777   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:52.446366   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:52.760091   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:52.842839   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:52.946010   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:52.946327   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:53.259798   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:53.343644   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:53.445722   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:53.445993   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:53.653951   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:53.760624   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:53.843690   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:53.945674   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:53.946141   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:54.260486   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:54.360334   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:54.446126   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:54.446415   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:54.760657   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:54.843572   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:54.945937   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:54.946347   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:55.261152   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:55.342911   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:55.446044   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:55.446707   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:55.760208   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:55.842710   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:55.946017   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:55.946363   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:56.154467   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:56.260735   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:56.343377   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:56.446873   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:56.447546   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:56.760307   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:56.843166   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:56.946231   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:56.946446   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:57.260745   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:57.359936   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:57.446044   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:57.446896   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:57.760711   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:57.844133   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:57.953069   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:57.953847   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:58.266805   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:58.268009   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:58.342958   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:58.445848   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:58.446207   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:58.760729   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:58.843831   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:58.945857   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:58.946233   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:59.260845   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:59.342473   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:59.445563   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:59.445784   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:59.833764   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:59.843671   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:00.025251   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:00.026958   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:00.331459   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:00.343401   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:00.446937   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:00.447876   17296 kapi.go:107] duration metric: took 48.505645503s to wait for kubernetes.io/minikube-addons=registry ...
	I1009 18:48:00.655877   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:00.828240   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:00.843871   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:00.948670   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:01.328473   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:01.344250   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:01.447373   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:01.826838   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:01.844012   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:01.946643   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:02.260901   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:02.343452   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:02.445427   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:02.760216   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:02.842922   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:02.946312   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:03.154235   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:03.261221   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:03.343823   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:03.446297   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:03.760629   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:03.843655   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:03.945900   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:04.260415   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:04.343387   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:04.446433   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:04.760872   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:04.843553   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:04.945613   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:05.260673   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:05.343279   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:05.445325   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:05.654683   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:05.760548   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:05.843082   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:05.946317   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:06.260775   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:06.343462   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:06.527589   17296 kapi.go:107] duration metric: took 54.585549271s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1009 18:48:06.830614   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:06.843150   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:07.260177   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:07.343435   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:07.655453   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:07.760402   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:07.843152   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:08.261859   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:08.363118   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:08.761072   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:08.843661   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:09.260481   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:09.343494   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:09.788497   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:09.854862   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:10.153622   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:10.260563   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:10.343307   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:10.761204   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:10.843348   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:11.260472   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:11.360877   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:11.760438   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:11.843463   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:12.153835   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:12.260615   17296 kapi.go:107] duration metric: took 59.004669658s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1009 18:48:12.343341   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:12.843416   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:13.343137   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:13.843500   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:14.154877   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:14.343882   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:14.843625   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:15.343544   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:15.843109   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:16.343253   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:16.653107   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:16.843682   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:17.343872   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:17.843693   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:18.342897   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:18.653895   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:18.843279   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:19.343110   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:19.842717   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:20.343898   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:20.654036   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:20.843665   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:21.343105   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:21.843728   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:22.343627   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:22.843131   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:23.154817   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:23.343306   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:23.843650   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:24.342878   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:24.843169   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:25.343091   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:25.653774   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:25.843398   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:26.343625   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:26.842860   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:27.343768   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:27.843773   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:28.154240   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:28.343018   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:28.843701   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:29.343688   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:29.844325   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:30.155325   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:30.343039   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:30.844055   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:31.343489   17296 kapi.go:107] duration metric: took 1m14.50354887s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1009 18:48:31.345687   17296 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-814968 cluster.
	I1009 18:48:31.347380   17296 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1009 18:48:31.348888   17296 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1009 18:48:31.350701   17296 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, storage-provisioner-rancher, metrics-server, yakd, inspektor-gadget, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1009 18:48:31.352263   17296 addons.go:510] duration metric: took 1m25.438051425s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns storage-provisioner-rancher metrics-server yakd inspektor-gadget volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
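	[editor's note] The gcp-auth messages above mean every new pod in this cluster gets the (fake) credentials mounted unless it opts out via the `gcp-auth-skip-secret` label key named in the output. A minimal sketch of the opt-out, assuming standard kubectl run flags and an arbitrary label value (only the key matters per the message above):

	    # hypothetical pod the gcp-auth webhook should leave unmounted
	    kubectl --context addons-814968 run no-creds-test \
	      --image=busybox --restart=Never \
	      --labels="gcp-auth-skip-secret=true" -- sleep 3600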
	I1009 18:48:32.653863   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:35.154466   17296 pod_ready.go:93] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:35.154498   17296 pod_ready.go:82] duration metric: took 1m8.006349266s for pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:35.154511   17296 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-7txf4" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:35.159453   17296 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-7txf4" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:35.159481   17296 pod_ready.go:82] duration metric: took 4.961783ms for pod "nvidia-device-plugin-daemonset-7txf4" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:35.159507   17296 pod_ready.go:39] duration metric: took 1m10.422897734s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 18:48:35.159528   17296 api_server.go:52] waiting for apiserver process to appear ...
	I1009 18:48:35.159565   17296 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:48:35.159630   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:48:35.194557   17296 cri.go:89] found id: "16933cbf0d80208494e9bb3cb9830fd168761741d4d4e7647b2b55ed3eb43c8c"
	I1009 18:48:35.194582   17296 cri.go:89] found id: ""
	I1009 18:48:35.194592   17296 logs.go:282] 1 containers: [16933cbf0d80208494e9bb3cb9830fd168761741d4d4e7647b2b55ed3eb43c8c]
	I1009 18:48:35.194645   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:35.197956   17296 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:48:35.198021   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:48:35.231379   17296 cri.go:89] found id: "1fa69ee53f8ff640d4a04e850ee35f59a25d1e106284730f1dfe646abd9fbe38"
	I1009 18:48:35.231399   17296 cri.go:89] found id: ""
	I1009 18:48:35.231408   17296 logs.go:282] 1 containers: [1fa69ee53f8ff640d4a04e850ee35f59a25d1e106284730f1dfe646abd9fbe38]
	I1009 18:48:35.231466   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:35.234767   17296 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:48:35.234839   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:48:35.269878   17296 cri.go:89] found id: "02903fa33ba6d3cfcad710302d0a6876157cbaea48453fe830b9ea47e1e6a08b"
	I1009 18:48:35.269900   17296 cri.go:89] found id: ""
	I1009 18:48:35.269907   17296 logs.go:282] 1 containers: [02903fa33ba6d3cfcad710302d0a6876157cbaea48453fe830b9ea47e1e6a08b]
	I1009 18:48:35.269959   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:35.273465   17296 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:48:35.273534   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:48:35.307585   17296 cri.go:89] found id: "221dded81f0de01ca060a812c2b039396973be8cb1b615537a1dd3baf3739915"
	I1009 18:48:35.307609   17296 cri.go:89] found id: ""
	I1009 18:48:35.307620   17296 logs.go:282] 1 containers: [221dded81f0de01ca060a812c2b039396973be8cb1b615537a1dd3baf3739915]
	I1009 18:48:35.307671   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:35.311029   17296 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:48:35.311088   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:48:35.345746   17296 cri.go:89] found id: "2ecd337cc588bb369774c4b52c5bf4512b7380cb6e1c083dfd205b222052bfb1"
	I1009 18:48:35.345769   17296 cri.go:89] found id: ""
	I1009 18:48:35.345777   17296 logs.go:282] 1 containers: [2ecd337cc588bb369774c4b52c5bf4512b7380cb6e1c083dfd205b222052bfb1]
	I1009 18:48:35.345823   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:35.349300   17296 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:48:35.349379   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:48:35.383274   17296 cri.go:89] found id: "6851332d0dffcb1290144bb0978b1aae3c11a2c819bc2626e8621211c70ff867"
	I1009 18:48:35.383302   17296 cri.go:89] found id: ""
	I1009 18:48:35.383313   17296 logs.go:282] 1 containers: [6851332d0dffcb1290144bb0978b1aae3c11a2c819bc2626e8621211c70ff867]
	I1009 18:48:35.383374   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:35.386711   17296 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:48:35.386773   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:48:35.419254   17296 cri.go:89] found id: "f2f6ada66ed9163b87179ad3082fc3f7b8897658ed1e8a9ee86fb7b0ffd5c67c"
	I1009 18:48:35.419281   17296 cri.go:89] found id: ""
	I1009 18:48:35.419292   17296 logs.go:282] 1 containers: [f2f6ada66ed9163b87179ad3082fc3f7b8897658ed1e8a9ee86fb7b0ffd5c67c]
	I1009 18:48:35.419349   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:35.422688   17296 logs.go:123] Gathering logs for kindnet [f2f6ada66ed9163b87179ad3082fc3f7b8897658ed1e8a9ee86fb7b0ffd5c67c] ...
	I1009 18:48:35.422711   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2f6ada66ed9163b87179ad3082fc3f7b8897658ed1e8a9ee86fb7b0ffd5c67c"
	I1009 18:48:35.455388   17296 logs.go:123] Gathering logs for container status ...
	I1009 18:48:35.455414   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:48:35.495741   17296 logs.go:123] Gathering logs for etcd [1fa69ee53f8ff640d4a04e850ee35f59a25d1e106284730f1dfe646abd9fbe38] ...
	I1009 18:48:35.495767   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fa69ee53f8ff640d4a04e850ee35f59a25d1e106284730f1dfe646abd9fbe38"
	I1009 18:48:35.536322   17296 logs.go:123] Gathering logs for kube-proxy [2ecd337cc588bb369774c4b52c5bf4512b7380cb6e1c083dfd205b222052bfb1] ...
	I1009 18:48:35.536364   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ecd337cc588bb369774c4b52c5bf4512b7380cb6e1c083dfd205b222052bfb1"
	I1009 18:48:35.570198   17296 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:48:35.570224   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 18:48:35.665335   17296 logs.go:123] Gathering logs for kube-apiserver [16933cbf0d80208494e9bb3cb9830fd168761741d4d4e7647b2b55ed3eb43c8c] ...
	I1009 18:48:35.665365   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16933cbf0d80208494e9bb3cb9830fd168761741d4d4e7647b2b55ed3eb43c8c"
	I1009 18:48:35.709971   17296 logs.go:123] Gathering logs for coredns [02903fa33ba6d3cfcad710302d0a6876157cbaea48453fe830b9ea47e1e6a08b] ...
	I1009 18:48:35.710008   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02903fa33ba6d3cfcad710302d0a6876157cbaea48453fe830b9ea47e1e6a08b"
	I1009 18:48:35.744816   17296 logs.go:123] Gathering logs for kube-scheduler [221dded81f0de01ca060a812c2b039396973be8cb1b615537a1dd3baf3739915] ...
	I1009 18:48:35.744843   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221dded81f0de01ca060a812c2b039396973be8cb1b615537a1dd3baf3739915"
	I1009 18:48:35.786298   17296 logs.go:123] Gathering logs for kube-controller-manager [6851332d0dffcb1290144bb0978b1aae3c11a2c819bc2626e8621211c70ff867] ...
	I1009 18:48:35.786339   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6851332d0dffcb1290144bb0978b1aae3c11a2c819bc2626e8621211c70ff867"
	I1009 18:48:35.843800   17296 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:48:35.843834   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:48:35.916834   17296 logs.go:123] Gathering logs for kubelet ...
	I1009 18:48:35.916878   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1009 18:48:35.965387   17296 logs.go:138] Found kubelet problem: Oct 09 18:47:06 addons-814968 kubelet[1623]: W1009 18:47:06.043067    1623 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-814968" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-814968' and this object
	W1009 18:48:35.965586   17296 logs.go:138] Found kubelet problem: Oct 09 18:47:06 addons-814968 kubelet[1623]: E1009 18:47:06.043134    1623 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-814968\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-814968' and this object" logger="UnhandledError"
	I1009 18:48:35.997670   17296 logs.go:123] Gathering logs for dmesg ...
	I1009 18:48:35.997708   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:48:36.010199   17296 out.go:358] Setting ErrFile to fd 2...
	I1009 18:48:36.010221   17296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1009 18:48:36.010298   17296 out.go:270] X Problems detected in kubelet:
	W1009 18:48:36.010310   17296 out.go:270]   Oct 09 18:47:06 addons-814968 kubelet[1623]: W1009 18:47:06.043067    1623 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-814968" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-814968' and this object
	W1009 18:48:36.010317   17296 out.go:270]   Oct 09 18:47:06 addons-814968 kubelet[1623]: E1009 18:47:06.043134    1623 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-814968\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-814968' and this object" logger="UnhandledError"
	I1009 18:48:36.010329   17296 out.go:358] Setting ErrFile to fd 2...
	I1009 18:48:36.010339   17296 out.go:392] TERM=,COLORTERM=, which probably does not support color
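	[editor's note] The diagnostic pass above gathers component logs with a two-step crictl pattern: resolve a container ID by name, then tail that container's logs. A minimal shell sketch of the same steps run inside the node (both commands appear verbatim in the run; the variable name is ours):

	    # resolve the kube-apiserver container ID, then tail its last 400 lines
	    ID=$(sudo crictl ps -a --quiet --name=kube-apiserver)
	    sudo /usr/bin/crictl logs --tail 400 "$ID"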
	I1009 18:48:46.011629   17296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:48:46.024857   17296 api_server.go:72] duration metric: took 1m40.110703672s to wait for apiserver process to appear ...
	I1009 18:48:46.024883   17296 api_server.go:88] waiting for apiserver healthz status ...
	I1009 18:48:46.024915   17296 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:48:46.024970   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:48:46.058499   17296 cri.go:89] found id: "16933cbf0d80208494e9bb3cb9830fd168761741d4d4e7647b2b55ed3eb43c8c"
	I1009 18:48:46.058520   17296 cri.go:89] found id: ""
	I1009 18:48:46.058527   17296 logs.go:282] 1 containers: [16933cbf0d80208494e9bb3cb9830fd168761741d4d4e7647b2b55ed3eb43c8c]
	I1009 18:48:46.058574   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:46.061901   17296 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:48:46.061978   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:48:46.094795   17296 cri.go:89] found id: "1fa69ee53f8ff640d4a04e850ee35f59a25d1e106284730f1dfe646abd9fbe38"
	I1009 18:48:46.094816   17296 cri.go:89] found id: ""
	I1009 18:48:46.094824   17296 logs.go:282] 1 containers: [1fa69ee53f8ff640d4a04e850ee35f59a25d1e106284730f1dfe646abd9fbe38]
	I1009 18:48:46.094869   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:46.098067   17296 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:48:46.098128   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:48:46.130361   17296 cri.go:89] found id: "02903fa33ba6d3cfcad710302d0a6876157cbaea48453fe830b9ea47e1e6a08b"
	I1009 18:48:46.130385   17296 cri.go:89] found id: ""
	I1009 18:48:46.130393   17296 logs.go:282] 1 containers: [02903fa33ba6d3cfcad710302d0a6876157cbaea48453fe830b9ea47e1e6a08b]
	I1009 18:48:46.130438   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:46.133643   17296 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:48:46.133701   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:48:46.168196   17296 cri.go:89] found id: "221dded81f0de01ca060a812c2b039396973be8cb1b615537a1dd3baf3739915"
	I1009 18:48:46.168219   17296 cri.go:89] found id: ""
	I1009 18:48:46.168227   17296 logs.go:282] 1 containers: [221dded81f0de01ca060a812c2b039396973be8cb1b615537a1dd3baf3739915]
	I1009 18:48:46.168294   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:46.171547   17296 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:48:46.171605   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:48:46.205084   17296 cri.go:89] found id: "2ecd337cc588bb369774c4b52c5bf4512b7380cb6e1c083dfd205b222052bfb1"
	I1009 18:48:46.205110   17296 cri.go:89] found id: ""
	I1009 18:48:46.205118   17296 logs.go:282] 1 containers: [2ecd337cc588bb369774c4b52c5bf4512b7380cb6e1c083dfd205b222052bfb1]
	I1009 18:48:46.205161   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:46.208419   17296 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:48:46.208484   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:48:46.241599   17296 cri.go:89] found id: "6851332d0dffcb1290144bb0978b1aae3c11a2c819bc2626e8621211c70ff867"
	I1009 18:48:46.241621   17296 cri.go:89] found id: ""
	I1009 18:48:46.241631   17296 logs.go:282] 1 containers: [6851332d0dffcb1290144bb0978b1aae3c11a2c819bc2626e8621211c70ff867]
	I1009 18:48:46.241685   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:46.245016   17296 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:48:46.245073   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:48:46.278801   17296 cri.go:89] found id: "f2f6ada66ed9163b87179ad3082fc3f7b8897658ed1e8a9ee86fb7b0ffd5c67c"
	I1009 18:48:46.278821   17296 cri.go:89] found id: ""
	I1009 18:48:46.278829   17296 logs.go:282] 1 containers: [f2f6ada66ed9163b87179ad3082fc3f7b8897658ed1e8a9ee86fb7b0ffd5c67c]
	I1009 18:48:46.278872   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:46.282257   17296 logs.go:123] Gathering logs for etcd [1fa69ee53f8ff640d4a04e850ee35f59a25d1e106284730f1dfe646abd9fbe38] ...
	I1009 18:48:46.282285   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fa69ee53f8ff640d4a04e850ee35f59a25d1e106284730f1dfe646abd9fbe38"
	I1009 18:48:46.322549   17296 logs.go:123] Gathering logs for coredns [02903fa33ba6d3cfcad710302d0a6876157cbaea48453fe830b9ea47e1e6a08b] ...
	I1009 18:48:46.322587   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02903fa33ba6d3cfcad710302d0a6876157cbaea48453fe830b9ea47e1e6a08b"
	I1009 18:48:46.357924   17296 logs.go:123] Gathering logs for kube-scheduler [221dded81f0de01ca060a812c2b039396973be8cb1b615537a1dd3baf3739915] ...
	I1009 18:48:46.357958   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221dded81f0de01ca060a812c2b039396973be8cb1b615537a1dd3baf3739915"
	I1009 18:48:46.397521   17296 logs.go:123] Gathering logs for kube-controller-manager [6851332d0dffcb1290144bb0978b1aae3c11a2c819bc2626e8621211c70ff867] ...
	I1009 18:48:46.397555   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6851332d0dffcb1290144bb0978b1aae3c11a2c819bc2626e8621211c70ff867"
	I1009 18:48:46.457165   17296 logs.go:123] Gathering logs for kindnet [f2f6ada66ed9163b87179ad3082fc3f7b8897658ed1e8a9ee86fb7b0ffd5c67c] ...
	I1009 18:48:46.457201   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2f6ada66ed9163b87179ad3082fc3f7b8897658ed1e8a9ee86fb7b0ffd5c67c"
	I1009 18:48:46.490524   17296 logs.go:123] Gathering logs for kubelet ...
	I1009 18:48:46.490552   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1009 18:48:46.535478   17296 logs.go:138] Found kubelet problem: Oct 09 18:47:06 addons-814968 kubelet[1623]: W1009 18:47:06.043067    1623 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-814968" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-814968' and this object
	W1009 18:48:46.535658   17296 logs.go:138] Found kubelet problem: Oct 09 18:47:06 addons-814968 kubelet[1623]: E1009 18:47:06.043134    1623 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-814968\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-814968' and this object" logger="UnhandledError"
	I1009 18:48:46.572731   17296 logs.go:123] Gathering logs for dmesg ...
	I1009 18:48:46.572775   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:48:46.584660   17296 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:48:46.584688   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 18:48:46.684444   17296 logs.go:123] Gathering logs for container status ...
	I1009 18:48:46.684475   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:48:46.726249   17296 logs.go:123] Gathering logs for kube-apiserver [16933cbf0d80208494e9bb3cb9830fd168761741d4d4e7647b2b55ed3eb43c8c] ...
	I1009 18:48:46.726275   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16933cbf0d80208494e9bb3cb9830fd168761741d4d4e7647b2b55ed3eb43c8c"
	I1009 18:48:46.771681   17296 logs.go:123] Gathering logs for kube-proxy [2ecd337cc588bb369774c4b52c5bf4512b7380cb6e1c083dfd205b222052bfb1] ...
	I1009 18:48:46.771728   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ecd337cc588bb369774c4b52c5bf4512b7380cb6e1c083dfd205b222052bfb1"
	I1009 18:48:46.806520   17296 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:48:46.806561   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:48:46.881346   17296 out.go:358] Setting ErrFile to fd 2...
	I1009 18:48:46.881380   17296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1009 18:48:46.881439   17296 out.go:270] X Problems detected in kubelet:
	W1009 18:48:46.881447   17296 out.go:270]   Oct 09 18:47:06 addons-814968 kubelet[1623]: W1009 18:47:06.043067    1623 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-814968" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-814968' and this object
	W1009 18:48:46.881454   17296 out.go:270]   Oct 09 18:47:06 addons-814968 kubelet[1623]: E1009 18:47:06.043134    1623 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-814968\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-814968' and this object" logger="UnhandledError"
	I1009 18:48:46.881460   17296 out.go:358] Setting ErrFile to fd 2...
	I1009 18:48:46.881467   17296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:48:56.881995   17296 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1009 18:48:56.886467   17296 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1009 18:48:56.887447   17296 api_server.go:141] control plane version: v1.31.1
	I1009 18:48:56.887474   17296 api_server.go:131] duration metric: took 10.862584003s to wait for apiserver health ...
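	[editor's note] The healthz probe above is a plain unauthenticated GET; by default Kubernetes exposes /healthz (along with /livez and /readyz) to anonymous clients. A one-line reproduction from any machine with a route to the node IP, assuming the default self-signed serving certificate (hence -k):

	    curl -k https://192.168.49.2:8443/healthz   # expected body: ok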
	I1009 18:48:56.887487   17296 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 18:48:56.887597   17296 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:48:56.887677   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:48:56.921141   17296 cri.go:89] found id: "16933cbf0d80208494e9bb3cb9830fd168761741d4d4e7647b2b55ed3eb43c8c"
	I1009 18:48:56.921166   17296 cri.go:89] found id: ""
	I1009 18:48:56.921175   17296 logs.go:282] 1 containers: [16933cbf0d80208494e9bb3cb9830fd168761741d4d4e7647b2b55ed3eb43c8c]
	I1009 18:48:56.921222   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:56.924386   17296 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:48:56.924458   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:48:56.957508   17296 cri.go:89] found id: "1fa69ee53f8ff640d4a04e850ee35f59a25d1e106284730f1dfe646abd9fbe38"
	I1009 18:48:56.957532   17296 cri.go:89] found id: ""
	I1009 18:48:56.957540   17296 logs.go:282] 1 containers: [1fa69ee53f8ff640d4a04e850ee35f59a25d1e106284730f1dfe646abd9fbe38]
	I1009 18:48:56.957585   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:56.960906   17296 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:48:56.960966   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:48:56.994274   17296 cri.go:89] found id: "02903fa33ba6d3cfcad710302d0a6876157cbaea48453fe830b9ea47e1e6a08b"
	I1009 18:48:56.994302   17296 cri.go:89] found id: ""
	I1009 18:48:56.994312   17296 logs.go:282] 1 containers: [02903fa33ba6d3cfcad710302d0a6876157cbaea48453fe830b9ea47e1e6a08b]
	I1009 18:48:56.994370   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:56.998013   17296 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:48:56.998083   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:48:57.031708   17296 cri.go:89] found id: "221dded81f0de01ca060a812c2b039396973be8cb1b615537a1dd3baf3739915"
	I1009 18:48:57.031728   17296 cri.go:89] found id: ""
	I1009 18:48:57.031734   17296 logs.go:282] 1 containers: [221dded81f0de01ca060a812c2b039396973be8cb1b615537a1dd3baf3739915]
	I1009 18:48:57.031786   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:57.035185   17296 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:48:57.035275   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:48:57.071177   17296 cri.go:89] found id: "2ecd337cc588bb369774c4b52c5bf4512b7380cb6e1c083dfd205b222052bfb1"
	I1009 18:48:57.071223   17296 cri.go:89] found id: ""
	I1009 18:48:57.071234   17296 logs.go:282] 1 containers: [2ecd337cc588bb369774c4b52c5bf4512b7380cb6e1c083dfd205b222052bfb1]
	I1009 18:48:57.071296   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:57.074708   17296 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:48:57.074773   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:48:57.110767   17296 cri.go:89] found id: "6851332d0dffcb1290144bb0978b1aae3c11a2c819bc2626e8621211c70ff867"
	I1009 18:48:57.110787   17296 cri.go:89] found id: ""
	I1009 18:48:57.110796   17296 logs.go:282] 1 containers: [6851332d0dffcb1290144bb0978b1aae3c11a2c819bc2626e8621211c70ff867]
	I1009 18:48:57.110851   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:57.114310   17296 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:48:57.114378   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:48:57.152783   17296 cri.go:89] found id: "f2f6ada66ed9163b87179ad3082fc3f7b8897658ed1e8a9ee86fb7b0ffd5c67c"
	I1009 18:48:57.152802   17296 cri.go:89] found id: ""
	I1009 18:48:57.152808   17296 logs.go:282] 1 containers: [f2f6ada66ed9163b87179ad3082fc3f7b8897658ed1e8a9ee86fb7b0ffd5c67c]
	I1009 18:48:57.152854   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:57.156527   17296 logs.go:123] Gathering logs for kube-controller-manager [6851332d0dffcb1290144bb0978b1aae3c11a2c819bc2626e8621211c70ff867] ...
	I1009 18:48:57.156549   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6851332d0dffcb1290144bb0978b1aae3c11a2c819bc2626e8621211c70ff867"
	I1009 18:48:57.211216   17296 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:48:57.211253   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:48:57.288037   17296 logs.go:123] Gathering logs for etcd [1fa69ee53f8ff640d4a04e850ee35f59a25d1e106284730f1dfe646abd9fbe38] ...
	I1009 18:48:57.288078   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fa69ee53f8ff640d4a04e850ee35f59a25d1e106284730f1dfe646abd9fbe38"
	I1009 18:48:57.330258   17296 logs.go:123] Gathering logs for coredns [02903fa33ba6d3cfcad710302d0a6876157cbaea48453fe830b9ea47e1e6a08b] ...
	I1009 18:48:57.330290   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02903fa33ba6d3cfcad710302d0a6876157cbaea48453fe830b9ea47e1e6a08b"
	I1009 18:48:57.369141   17296 logs.go:123] Gathering logs for kube-scheduler [221dded81f0de01ca060a812c2b039396973be8cb1b615537a1dd3baf3739915] ...
	I1009 18:48:57.369185   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221dded81f0de01ca060a812c2b039396973be8cb1b615537a1dd3baf3739915"
	I1009 18:48:57.409572   17296 logs.go:123] Gathering logs for kube-proxy [2ecd337cc588bb369774c4b52c5bf4512b7380cb6e1c083dfd205b222052bfb1] ...
	I1009 18:48:57.409605   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ecd337cc588bb369774c4b52c5bf4512b7380cb6e1c083dfd205b222052bfb1"
	I1009 18:48:57.442324   17296 logs.go:123] Gathering logs for kubelet ...
	I1009 18:48:57.442359   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1009 18:48:57.487455   17296 logs.go:138] Found kubelet problem: Oct 09 18:47:06 addons-814968 kubelet[1623]: W1009 18:47:06.043067    1623 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-814968" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-814968' and this object
	W1009 18:48:57.487640   17296 logs.go:138] Found kubelet problem: Oct 09 18:47:06 addons-814968 kubelet[1623]: E1009 18:47:06.043134    1623 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-814968\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-814968' and this object" logger="UnhandledError"
	I1009 18:48:57.519372   17296 logs.go:123] Gathering logs for dmesg ...
	I1009 18:48:57.519410   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:48:57.531766   17296 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:48:57.531800   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 18:48:57.630041   17296 logs.go:123] Gathering logs for kube-apiserver [16933cbf0d80208494e9bb3cb9830fd168761741d4d4e7647b2b55ed3eb43c8c] ...
	I1009 18:48:57.630077   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16933cbf0d80208494e9bb3cb9830fd168761741d4d4e7647b2b55ed3eb43c8c"
	I1009 18:48:57.673703   17296 logs.go:123] Gathering logs for kindnet [f2f6ada66ed9163b87179ad3082fc3f7b8897658ed1e8a9ee86fb7b0ffd5c67c] ...
	I1009 18:48:57.673733   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2f6ada66ed9163b87179ad3082fc3f7b8897658ed1e8a9ee86fb7b0ffd5c67c"
	I1009 18:48:57.708491   17296 logs.go:123] Gathering logs for container status ...
	I1009 18:48:57.708522   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:48:57.749802   17296 out.go:358] Setting ErrFile to fd 2...
	I1009 18:48:57.749824   17296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1009 18:48:57.749877   17296 out.go:270] X Problems detected in kubelet:
	W1009 18:48:57.749890   17296 out.go:270]   Oct 09 18:47:06 addons-814968 kubelet[1623]: W1009 18:47:06.043067    1623 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-814968" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-814968' and this object
	W1009 18:48:57.749901   17296 out.go:270]   Oct 09 18:47:06 addons-814968 kubelet[1623]: E1009 18:47:06.043134    1623 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-814968\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-814968' and this object" logger="UnhandledError"
	I1009 18:48:57.749910   17296 out.go:358] Setting ErrFile to fd 2...
	I1009 18:48:57.749915   17296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:49:07.760079   17296 system_pods.go:59] 18 kube-system pods found
	I1009 18:49:07.760124   17296 system_pods.go:61] "coredns-7c65d6cfc9-dcfpw" [ab2ddf3f-03de-4761-947c-d307eb22d417] Running
	I1009 18:49:07.760136   17296 system_pods.go:61] "csi-hostpath-attacher-0" [e272e252-86b0-4468-9131-dca02745720a] Running
	I1009 18:49:07.760141   17296 system_pods.go:61] "csi-hostpath-resizer-0" [bfd004fc-a591-4578-b359-f70ef5724f11] Running
	I1009 18:49:07.760146   17296 system_pods.go:61] "csi-hostpathplugin-fqb8x" [2f8a767d-d27d-4ba0-8919-fdc68455832c] Running
	I1009 18:49:07.760152   17296 system_pods.go:61] "etcd-addons-814968" [5100735e-81ed-4e86-9da0-3f7f79a02d4f] Running
	I1009 18:49:07.760157   17296 system_pods.go:61] "kindnet-mdrqx" [d90881e9-cfe6-4d42-8003-9efb160a7937] Running
	I1009 18:49:07.760162   17296 system_pods.go:61] "kube-apiserver-addons-814968" [315b151b-2aca-4e06-8c8a-e81807aa1638] Running
	I1009 18:49:07.760168   17296 system_pods.go:61] "kube-controller-manager-addons-814968" [0882300f-9693-46ce-a584-9712095a27ed] Running
	I1009 18:49:07.760176   17296 system_pods.go:61] "kube-ingress-dns-minikube" [5fd07203-977b-4e7c-b6db-81030c0af955] Running
	I1009 18:49:07.760183   17296 system_pods.go:61] "kube-proxy-wprfw" [9204c10f-c636-4846-8ee8-46635c3324e2] Running
	I1009 18:49:07.760191   17296 system_pods.go:61] "kube-scheduler-addons-814968" [b4efbf7d-41ce-447a-80d1-6d4fe68f3f0c] Running
	I1009 18:49:07.760197   17296 system_pods.go:61] "metrics-server-84c5f94fbc-5gbfm" [aecf0efb-0d9b-429c-82bb-0aa04751f7f0] Running
	I1009 18:49:07.760204   17296 system_pods.go:61] "nvidia-device-plugin-daemonset-7txf4" [91c3baad-6ee1-4595-bce6-7b2db5cb9cd3] Running
	I1009 18:49:07.760210   17296 system_pods.go:61] "registry-66c9cd494c-s2zbn" [e5e37670-4f6a-48d7-8ec0-96a1df679765] Running
	I1009 18:49:07.760218   17296 system_pods.go:61] "registry-proxy-zpr6p" [1a3e151b-470d-420f-a50b-d42194bf9620] Running
	I1009 18:49:07.760224   17296 system_pods.go:61] "snapshot-controller-56fcc65765-5z6gs" [4ed3dbbb-226e-4b73-bd8b-8bb50514d365] Running
	I1009 18:49:07.760233   17296 system_pods.go:61] "snapshot-controller-56fcc65765-l6fk4" [1f1a2f1f-a768-4156-b406-731c3890ec0f] Running
	I1009 18:49:07.760239   17296 system_pods.go:61] "storage-provisioner" [522ad8d0-bab3-4c94-9914-42a4afc097ba] Running
	I1009 18:49:07.760249   17296 system_pods.go:74] duration metric: took 10.87275449s to wait for pod list to return data ...
	I1009 18:49:07.760261   17296 default_sa.go:34] waiting for default service account to be created ...
	I1009 18:49:07.762809   17296 default_sa.go:45] found service account: "default"
	I1009 18:49:07.762830   17296 default_sa.go:55] duration metric: took 2.560915ms for default service account to be created ...
	I1009 18:49:07.762837   17296 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 18:49:07.771494   17296 system_pods.go:86] 18 kube-system pods found
	I1009 18:49:07.771528   17296 system_pods.go:89] "coredns-7c65d6cfc9-dcfpw" [ab2ddf3f-03de-4761-947c-d307eb22d417] Running
	I1009 18:49:07.771536   17296 system_pods.go:89] "csi-hostpath-attacher-0" [e272e252-86b0-4468-9131-dca02745720a] Running
	I1009 18:49:07.771542   17296 system_pods.go:89] "csi-hostpath-resizer-0" [bfd004fc-a591-4578-b359-f70ef5724f11] Running
	I1009 18:49:07.771547   17296 system_pods.go:89] "csi-hostpathplugin-fqb8x" [2f8a767d-d27d-4ba0-8919-fdc68455832c] Running
	I1009 18:49:07.771552   17296 system_pods.go:89] "etcd-addons-814968" [5100735e-81ed-4e86-9da0-3f7f79a02d4f] Running
	I1009 18:49:07.771558   17296 system_pods.go:89] "kindnet-mdrqx" [d90881e9-cfe6-4d42-8003-9efb160a7937] Running
	I1009 18:49:07.771563   17296 system_pods.go:89] "kube-apiserver-addons-814968" [315b151b-2aca-4e06-8c8a-e81807aa1638] Running
	I1009 18:49:07.771570   17296 system_pods.go:89] "kube-controller-manager-addons-814968" [0882300f-9693-46ce-a584-9712095a27ed] Running
	I1009 18:49:07.771577   17296 system_pods.go:89] "kube-ingress-dns-minikube" [5fd07203-977b-4e7c-b6db-81030c0af955] Running
	I1009 18:49:07.771582   17296 system_pods.go:89] "kube-proxy-wprfw" [9204c10f-c636-4846-8ee8-46635c3324e2] Running
	I1009 18:49:07.771589   17296 system_pods.go:89] "kube-scheduler-addons-814968" [b4efbf7d-41ce-447a-80d1-6d4fe68f3f0c] Running
	I1009 18:49:07.771597   17296 system_pods.go:89] "metrics-server-84c5f94fbc-5gbfm" [aecf0efb-0d9b-429c-82bb-0aa04751f7f0] Running
	I1009 18:49:07.771605   17296 system_pods.go:89] "nvidia-device-plugin-daemonset-7txf4" [91c3baad-6ee1-4595-bce6-7b2db5cb9cd3] Running
	I1009 18:49:07.771611   17296 system_pods.go:89] "registry-66c9cd494c-s2zbn" [e5e37670-4f6a-48d7-8ec0-96a1df679765] Running
	I1009 18:49:07.771617   17296 system_pods.go:89] "registry-proxy-zpr6p" [1a3e151b-470d-420f-a50b-d42194bf9620] Running
	I1009 18:49:07.771623   17296 system_pods.go:89] "snapshot-controller-56fcc65765-5z6gs" [4ed3dbbb-226e-4b73-bd8b-8bb50514d365] Running
	I1009 18:49:07.771629   17296 system_pods.go:89] "snapshot-controller-56fcc65765-l6fk4" [1f1a2f1f-a768-4156-b406-731c3890ec0f] Running
	I1009 18:49:07.771635   17296 system_pods.go:89] "storage-provisioner" [522ad8d0-bab3-4c94-9914-42a4afc097ba] Running
	I1009 18:49:07.771645   17296 system_pods.go:126] duration metric: took 8.802073ms to wait for k8s-apps to be running ...
	I1009 18:49:07.771658   17296 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 18:49:07.771712   17296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:49:07.783055   17296 system_svc.go:56] duration metric: took 11.385735ms WaitForService to wait for kubelet
	I1009 18:49:07.783080   17296 kubeadm.go:582] duration metric: took 2m1.8689302s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:49:07.783098   17296 node_conditions.go:102] verifying NodePressure condition ...
	I1009 18:49:07.786198   17296 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1009 18:49:07.786246   17296 node_conditions.go:123] node cpu capacity is 8
	I1009 18:49:07.786260   17296 node_conditions.go:105] duration metric: took 3.157884ms to run NodePressure ...
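	[editor's note] The capacity figures above (304681132Ki ephemeral storage, 8 CPUs) are read from the node's status. A hedged one-liner to read them back, assuming the context and node name from this run:

	    kubectl --context addons-814968 get node addons-814968 -o jsonpath='{.status.capacity}'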
	I1009 18:49:07.786271   17296 start.go:241] waiting for startup goroutines ...
	I1009 18:49:07.786278   17296 start.go:246] waiting for cluster config update ...
	I1009 18:49:07.786294   17296 start.go:255] writing updated cluster config ...
	I1009 18:49:07.786596   17296 ssh_runner.go:195] Run: rm -f paused
	I1009 18:49:07.837121   17296 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 18:49:07.839371   17296 out.go:177] * Done! kubectl is now configured to use "addons-814968" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 09 18:59:00 addons-814968 crio[1029]: time="2024-10-09 18:59:00.626092927Z" level=info msg="Removed pod sandbox: 3d64593e0fa6f92995c67386fd6300ad7c0bbb28fa9e15d5844eb8dc69d27fcd" id=f839b9f0-0eaa-40b2-b4b7-65f4e30c2a80 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 18:59:00 addons-814968 crio[1029]: time="2024-10-09 18:59:00.626595236Z" level=info msg="Stopping pod sandbox: 0556adb60d6c9b19ebad5b39851ef39f88574ca0daff7ec49c453488c50afc6e" id=f138ee99-a7ee-49c2-b07b-d28b17701daf name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 18:59:00 addons-814968 crio[1029]: time="2024-10-09 18:59:00.626642159Z" level=info msg="Stopped pod sandbox (already stopped): 0556adb60d6c9b19ebad5b39851ef39f88574ca0daff7ec49c453488c50afc6e" id=f138ee99-a7ee-49c2-b07b-d28b17701daf name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 18:59:00 addons-814968 crio[1029]: time="2024-10-09 18:59:00.626962214Z" level=info msg="Removing pod sandbox: 0556adb60d6c9b19ebad5b39851ef39f88574ca0daff7ec49c453488c50afc6e" id=efcad44a-e36f-45ce-86e5-a3298c015e72 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 18:59:00 addons-814968 crio[1029]: time="2024-10-09 18:59:00.634554323Z" level=info msg="Removed pod sandbox: 0556adb60d6c9b19ebad5b39851ef39f88574ca0daff7ec49c453488c50afc6e" id=efcad44a-e36f-45ce-86e5-a3298c015e72 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 18:59:13 addons-814968 crio[1029]: time="2024-10-09 18:59:13.337145507Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=553af616-bec9-44ed-ac6b-5242a27baa62 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:59:13 addons-814968 crio[1029]: time="2024-10-09 18:59:13.337453762Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=553af616-bec9-44ed-ac6b-5242a27baa62 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:59:28 addons-814968 crio[1029]: time="2024-10-09 18:59:28.337054588Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=48c16d39-407d-4a96-a613-667bf1ae57d5 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:59:28 addons-814968 crio[1029]: time="2024-10-09 18:59:28.337343319Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=48c16d39-407d-4a96-a613-667bf1ae57d5 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:59:42 addons-814968 crio[1029]: time="2024-10-09 18:59:42.336735302Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5be869c8-2f13-4150-b49c-1ceb2fe11553 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:59:42 addons-814968 crio[1029]: time="2024-10-09 18:59:42.337003165Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=5be869c8-2f13-4150-b49c-1ceb2fe11553 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:59:55 addons-814968 crio[1029]: time="2024-10-09 18:59:55.336293314Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a19e3dad-b370-43e8-85cd-9308019bb1b4 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:59:55 addons-814968 crio[1029]: time="2024-10-09 18:59:55.336509204Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a19e3dad-b370-43e8-85cd-9308019bb1b4 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:59:58 addons-814968 crio[1029]: time="2024-10-09 18:59:58.950793811Z" level=info msg="Running pod sandbox: default/hello-world-app-55bf9c44b4-rxzq2/POD" id=692e8869-c56c-4521-be31-ec119d3a3033 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 18:59:58 addons-814968 crio[1029]: time="2024-10-09 18:59:58.950851113Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 09 18:59:58 addons-814968 crio[1029]: time="2024-10-09 18:59:58.972151345Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-rxzq2 Namespace:default ID:67b3815abd821278239237852852018ba235e0c26dbcb3a4a858fdfd7b35b396 UID:c5382932-5f77-4c8c-af72-5140f41eea6d NetNS:/var/run/netns/0d925b5a-b863-487f-993a-aa0e7793959b Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 09 18:59:58 addons-814968 crio[1029]: time="2024-10-09 18:59:58.972236720Z" level=info msg="Adding pod default_hello-world-app-55bf9c44b4-rxzq2 to CNI network \"kindnet\" (type=ptp)"
	Oct 09 18:59:58 addons-814968 crio[1029]: time="2024-10-09 18:59:58.982036667Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-rxzq2 Namespace:default ID:67b3815abd821278239237852852018ba235e0c26dbcb3a4a858fdfd7b35b396 UID:c5382932-5f77-4c8c-af72-5140f41eea6d NetNS:/var/run/netns/0d925b5a-b863-487f-993a-aa0e7793959b Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Oct 09 18:59:58 addons-814968 crio[1029]: time="2024-10-09 18:59:58.982166672Z" level=info msg="Checking pod default_hello-world-app-55bf9c44b4-rxzq2 for CNI network kindnet (type=ptp)"
	Oct 09 18:59:58 addons-814968 crio[1029]: time="2024-10-09 18:59:58.985975119Z" level=info msg="Ran pod sandbox 67b3815abd821278239237852852018ba235e0c26dbcb3a4a858fdfd7b35b396 with infra container: default/hello-world-app-55bf9c44b4-rxzq2/POD" id=692e8869-c56c-4521-be31-ec119d3a3033 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 09 18:59:59 addons-814968 crio[1029]: time="2024-10-09 18:59:59.025259058Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=8227f25f-f927-42e4-b262-f4a9dc45f168 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:59:59 addons-814968 crio[1029]: time="2024-10-09 18:59:59.025550232Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=8227f25f-f927-42e4-b262-f4a9dc45f168 name=/runtime.v1.ImageService/ImageStatus
	Oct 09 18:59:59 addons-814968 crio[1029]: time="2024-10-09 18:59:59.026103778Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=e2fa06f7-33cb-4a3b-ad0c-4ca98460b268 name=/runtime.v1.ImageService/PullImage
	Oct 09 18:59:59 addons-814968 crio[1029]: time="2024-10-09 18:59:59.044867754Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 09 18:59:59 addons-814968 crio[1029]: time="2024-10-09 18:59:59.547943234Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
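	[editor's note] The repeated "Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" entries above correspond to the ImagePullBackOff the PullSecret test reports. A sketch of how one might probe the pull directly from inside the node (profile name taken from this run; a manual crictl pull should surface the underlying registry or credential error):

	    minikube -p addons-814968 ssh "sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc"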
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e6c1bd6a1c201       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                              2 minutes ago       Running             nginx                     0                   44f6c5dfda074       nginx
	3388f10d0407b       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             11 minutes ago      Running             controller                0                   c944a5188b1dd       ingress-nginx-controller-bc57996ff-rdhdp
	cf7b6540335cc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              patch                     0                   c23f11210a3a3       ingress-nginx-admission-patch-lk7hx
	586088eebfd16       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              create                    0                   f5cf1b38c4ff6       ingress-nginx-admission-create-snl7p
	55fe7600051fd       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             12 minutes ago      Running             minikube-ingress-dns      0                   5026de9c2694c       kube-ingress-dns-minikube
	67672097bfd6f       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        12 minutes ago      Running             metrics-server            0                   2d3b55b67f56b       metrics-server-84c5f94fbc-5gbfm
	02903fa33ba6d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             12 minutes ago      Running             coredns                   0                   886685be67e39       coredns-7c65d6cfc9-dcfpw
	8caeb8fad85ec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             12 minutes ago      Running             storage-provisioner       0                   db8294a55800d       storage-provisioner
	f2f6ada66ed91       docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387                           12 minutes ago      Running             kindnet-cni               0                   f9793dc7e2762       kindnet-mdrqx
	2ecd337cc588b       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             12 minutes ago      Running             kube-proxy                0                   2d910828bcffc       kube-proxy-wprfw
	1fa69ee53f8ff       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             13 minutes ago      Running             etcd                      0                   20a300e46e71c       etcd-addons-814968
	221dded81f0de       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             13 minutes ago      Running             kube-scheduler            0                   58088163aa98e       kube-scheduler-addons-814968
	6851332d0dffc       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             13 minutes ago      Running             kube-controller-manager   0                   9c018b10e40e0       kube-controller-manager-addons-814968
	16933cbf0d802       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             13 minutes ago      Running             kube-apiserver            0                   9076ebf2b8037       kube-apiserver-addons-814968
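
Note: the table above is the runtime's view of every container on the node, including the two Exited admission-webhook jobs. A sketch for regenerating it directly, assuming crictl inside the minikube node image as usual (illustrative, not part of the recorded run):

  out/minikube-linux-amd64 -p addons-814968 ssh -- sudo crictl ps -a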
	
	
	==> coredns [02903fa33ba6d3cfcad710302d0a6876157cbaea48453fe830b9ea47e1e6a08b] <==
	[INFO] 10.244.0.17:45337 - 37987 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000120483s
	[INFO] 10.244.0.17:58311 - 45906 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005183511s
	[INFO] 10.244.0.17:58311 - 45583 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005646303s
	[INFO] 10.244.0.17:48527 - 13835 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006055319s
	[INFO] 10.244.0.17:48527 - 13576 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.007224397s
	[INFO] 10.244.0.17:44115 - 1185 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004895426s
	[INFO] 10.244.0.17:44115 - 919 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006009546s
	[INFO] 10.244.0.17:42190 - 18508 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000103773s
	[INFO] 10.244.0.17:42190 - 18279 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00014128s
	[INFO] 10.244.0.20:51664 - 163 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000218213s
	[INFO] 10.244.0.20:45534 - 9229 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000162777s
	[INFO] 10.244.0.20:42758 - 32478 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000135684s
	[INFO] 10.244.0.20:37263 - 46037 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000149484s
	[INFO] 10.244.0.20:58573 - 24706 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000134282s
	[INFO] 10.244.0.20:57447 - 35259 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000119142s
	[INFO] 10.244.0.20:36714 - 47708 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007176388s
	[INFO] 10.244.0.20:43426 - 20498 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007246464s
	[INFO] 10.244.0.20:42972 - 29235 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.009041746s
	[INFO] 10.244.0.20:43872 - 18868 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.00910552s
	[INFO] 10.244.0.20:37816 - 29283 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005540628s
	[INFO] 10.244.0.20:36098 - 22994 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007554549s
	[INFO] 10.244.0.20:58098 - 57248 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000889327s
	[INFO] 10.244.0.20:55334 - 43134 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000985195s
	[INFO] 10.244.0.23:37926 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000277419s
	[INFO] 10.244.0.23:57436 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000214168s
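
Note: the NXDOMAIN chain above is the standard Kubernetes search-path expansion. With ndots:5, a name like storage.googleapis.com (three dots) is tried against every search domain (gcp-auth.svc.cluster.local, svc.cluster.local, cluster.local, then the GCE-internal domains) before the absolute name finally resolves with NOERROR. A sketch of a pod resolv.conf that would produce exactly this sequence; the search domains are taken from the queries above, while the nameserver address is assumed to be the usual kube-dns ClusterIP and was not captured by this run:

  search gcp-auth.svc.cluster.local svc.cluster.local cluster.local us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
  nameserver 10.96.0.10
  options ndots:5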
	
	
	==> describe nodes <==
	Name:               addons-814968
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-814968
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=addons-814968
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_09T18_47_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-814968
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 18:46:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-814968
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 18:59:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 18:58:34 +0000   Wed, 09 Oct 2024 18:46:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 18:58:34 +0000   Wed, 09 Oct 2024 18:46:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 18:58:34 +0000   Wed, 09 Oct 2024 18:46:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 18:58:34 +0000   Wed, 09 Oct 2024 18:47:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-814968
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 665ec1e43df44148875bede2afed5690
	  System UUID:                af1ce627-aaca-4c57-a0b5-20a11a6bd390
	  Boot ID:                    5492573a-87f0-4d18-a115-1fca0501652a
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-world-app-55bf9c44b4-rxzq2            0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-rdhdp    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-dcfpw                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-addons-814968                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-mdrqx                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-addons-814968                250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-814968       200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-wprfw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-814968                100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-84c5f94fbc-5gbfm             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             510Mi (1%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 12m   kube-proxy       
	  Normal   Starting                 13m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  13m   kubelet          Node addons-814968 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m   kubelet          Node addons-814968 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m   kubelet          Node addons-814968 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node addons-814968 event: Registered Node addons-814968 in Controller
	  Normal   NodeReady                12m   kubelet          Node addons-814968 status is now: NodeReady
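
Note: the Allocated resources block is consistent with the pod table above. CPU requests: 100m (ingress-nginx-controller) + 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) + 100m (metrics-server) = 1050m, about 13% of the node's 8 CPUs (8000m). Memory requests: 90Mi + 70Mi + 100Mi + 50Mi + 200Mi = 510Mi. The only limits set are coredns's 170Mi plus kindnet's 50Mi memory (220Mi total) and kindnet's 100m CPU, matching the printed totals.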
	
	
	==> dmesg <==
	[  +0.000613] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000629] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000641] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000605] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000684] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000634] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.615432] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065032] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.027177] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +6.993095] kauditd_printk_skb: 44 callbacks suppressed
	[Oct 9 18:57] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 86 30 61 ea d3 0a f2 82 d1 8a 5a 08 00
	[  +1.019618] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000032] ll header: 00000000: d6 86 30 61 ea d3 0a f2 82 d1 8a 5a 08 00
	[  +2.019718] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: d6 86 30 61 ea d3 0a f2 82 d1 8a 5a 08 00
	[  +4.091682] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d6 86 30 61 ea d3 0a f2 82 d1 8a 5a 08 00
	[Oct 9 18:58] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: d6 86 30 61 ea d3 0a f2 82 d1 8a 5a 08 00
	[ +16.122612] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: d6 86 30 61 ea d3 0a f2 82 d1 8a 5a 08 00
	[ +34.045062] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: d6 86 30 61 ea d3 0a f2 82 d1 8a 5a 08 00
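
Note: the repeated "martian source" lines mean the kernel received packets on eth0 whose source address (127.0.0.1) is not valid for that interface; such packets are logged when reverse-path filtering and martian logging are enabled. A sketch for inspecting the relevant knobs inside the node (these are standard Linux sysctls; their values on this node were not captured by the run):

  out/minikube-linux-amd64 -p addons-814968 ssh -- sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians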
	
	
	==> etcd [1fa69ee53f8ff640d4a04e850ee35f59a25d1e106284730f1dfe646abd9fbe38] <==
	{"level":"info","ts":"2024-10-09T18:46:57.139647Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-10-09T18:46:57.140390Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-10-09T18:46:57.140811Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2024-10-09T18:47:07.545055Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.679118ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-addons-814968\" ","response":"range_response_count:1 size:4488"}
	{"level":"info","ts":"2024-10-09T18:47:07.545296Z","caller":"traceutil/trace.go:171","msg":"trace[254952150] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-addons-814968; range_end:; response_count:1; response_revision:354; }","duration":"100.926406ms","start":"2024-10-09T18:47:07.444349Z","end":"2024-10-09T18:47:07.545275Z","steps":["trace[254952150] 'agreement among raft nodes before linearized reading'  (duration: 100.575133ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-09T18:47:07.631074Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.939711ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-09T18:47:07.631162Z","caller":"traceutil/trace.go:171","msg":"trace[1315990972] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:0; response_revision:354; }","duration":"187.028388ms","start":"2024-10-09T18:47:07.444101Z","end":"2024-10-09T18:47:07.631130Z","steps":["trace[1315990972] 'agreement among raft nodes before linearized reading'  (duration: 99.819321ms)","trace[1315990972] 'range keys from in-memory index tree'  (duration: 87.095682ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-09T18:47:08.149189Z","caller":"traceutil/trace.go:171","msg":"trace[36121968] transaction","detail":"{read_only:false; response_revision:361; number_of_response:1; }","duration":"115.963151ms","start":"2024-10-09T18:47:08.033209Z","end":"2024-10-09T18:47:08.149172Z","steps":["trace[36121968] 'process raft request'  (duration: 115.292249ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T18:47:08.225266Z","caller":"traceutil/trace.go:171","msg":"trace[1679131097] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"101.228697ms","start":"2024-10-09T18:47:08.124022Z","end":"2024-10-09T18:47:08.225250Z","steps":["trace[1679131097] 'process raft request'  (duration: 101.19435ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T18:47:08.225544Z","caller":"traceutil/trace.go:171","msg":"trace[1385913193] transaction","detail":"{read_only:false; response_revision:362; number_of_response:1; }","duration":"183.311089ms","start":"2024-10-09T18:47:08.042224Z","end":"2024-10-09T18:47:08.225535Z","steps":["trace[1385913193] 'process raft request'  (duration: 182.814039ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T18:47:08.225700Z","caller":"traceutil/trace.go:171","msg":"trace[1097339318] transaction","detail":"{read_only:false; response_revision:363; number_of_response:1; }","duration":"183.219653ms","start":"2024-10-09T18:47:08.042474Z","end":"2024-10-09T18:47:08.225694Z","steps":["trace[1097339318] 'process raft request'  (duration: 182.651213ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T18:47:08.225768Z","caller":"traceutil/trace.go:171","msg":"trace[1121631799] transaction","detail":"{read_only:false; response_revision:364; number_of_response:1; }","duration":"183.22097ms","start":"2024-10-09T18:47:08.042542Z","end":"2024-10-09T18:47:08.225763Z","steps":["trace[1121631799] 'process raft request'  (duration: 182.623437ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T18:47:08.225867Z","caller":"traceutil/trace.go:171","msg":"trace[751389749] transaction","detail":"{read_only:false; response_revision:365; number_of_response:1; }","duration":"183.116259ms","start":"2024-10-09T18:47:08.042744Z","end":"2024-10-09T18:47:08.225860Z","steps":["trace[751389749] 'process raft request'  (duration: 182.445575ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-09T18:47:09.035003Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.512009ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-09T18:47:09.035261Z","caller":"traceutil/trace.go:171","msg":"trace[982658497] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:387; }","duration":"102.77967ms","start":"2024-10-09T18:47:08.932467Z","end":"2024-10-09T18:47:09.035247Z","steps":["trace[982658497] 'agreement among raft nodes before linearized reading'  (duration: 102.496738ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T18:47:11.844374Z","caller":"traceutil/trace.go:171","msg":"trace[200416362] transaction","detail":"{read_only:false; response_revision:603; number_of_response:1; }","duration":"104.878761ms","start":"2024-10-09T18:47:11.739474Z","end":"2024-10-09T18:47:11.844353Z","steps":["trace[200416362] 'process raft request'  (duration: 104.520364ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T18:47:47.942604Z","caller":"traceutil/trace.go:171","msg":"trace[1471584267] linearizableReadLoop","detail":"{readStateIndex:1005; appliedIndex:1004; }","duration":"101.305859ms","start":"2024-10-09T18:47:47.841276Z","end":"2024-10-09T18:47:47.942582Z","steps":["trace[1471584267] 'read index received'  (duration: 37.289331ms)","trace[1471584267] 'applied index is now lower than readState.Index'  (duration: 64.015916ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-09T18:47:47.942691Z","caller":"traceutil/trace.go:171","msg":"trace[1349289586] transaction","detail":"{read_only:false; response_revision:978; number_of_response:1; }","duration":"105.013231ms","start":"2024-10-09T18:47:47.837656Z","end":"2024-10-09T18:47:47.942669Z","steps":["trace[1349289586] 'process raft request'  (duration: 40.936293ms)","trace[1349289586] 'compare'  (duration: 63.914511ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-09T18:47:47.942715Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.416281ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-09T18:47:47.942743Z","caller":"traceutil/trace.go:171","msg":"trace[2022166272] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:978; }","duration":"101.466725ms","start":"2024-10-09T18:47:47.841269Z","end":"2024-10-09T18:47:47.942736Z","steps":["trace[2022166272] 'agreement among raft nodes before linearized reading'  (duration: 101.391084ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-09T18:47:58.265012Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.891286ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-814968\" ","response":"range_response_count:1 size:6238"}
	{"level":"info","ts":"2024-10-09T18:47:58.265079Z","caller":"traceutil/trace.go:171","msg":"trace[1066582241] range","detail":"{range_begin:/registry/minions/addons-814968; range_end:; response_count:1; response_revision:1038; }","duration":"110.969136ms","start":"2024-10-09T18:47:58.154097Z","end":"2024-10-09T18:47:58.265066Z","steps":["trace[1066582241] 'range keys from in-memory index tree'  (duration: 110.738887ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T18:56:57.155879Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1514}
	{"level":"info","ts":"2024-10-09T18:56:57.179084Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1514,"took":"22.769077ms","hash":3916989673,"current-db-size-bytes":6021120,"current-db-size":"6.0 MB","current-db-size-in-use-bytes":3117056,"current-db-size-in-use":"3.1 MB"}
	{"level":"info","ts":"2024-10-09T18:56:57.179141Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3916989673,"revision":1514,"compact-revision":-1}
	
	
	==> kernel <==
	 19:00:00 up 42 min,  0 users,  load average: 0.49, 0.74, 0.54
	Linux addons-814968 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [f2f6ada66ed9163b87179ad3082fc3f7b8897658ed1e8a9ee86fb7b0ffd5c67c] <==
	I1009 18:57:54.523831       1 main.go:300] handling current node
	I1009 18:58:04.524600       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:58:04.524637       1 main.go:300] handling current node
	I1009 18:58:14.523932       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:58:14.523978       1 main.go:300] handling current node
	I1009 18:58:24.524369       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:58:24.524416       1 main.go:300] handling current node
	I1009 18:58:34.524097       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:58:34.524138       1 main.go:300] handling current node
	I1009 18:58:44.524580       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:58:44.524640       1 main.go:300] handling current node
	I1009 18:58:54.531294       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:58:54.531328       1 main.go:300] handling current node
	I1009 18:59:04.524495       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:59:04.524545       1 main.go:300] handling current node
	I1009 18:59:14.524060       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:59:14.524093       1 main.go:300] handling current node
	I1009 18:59:24.531275       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:59:24.531308       1 main.go:300] handling current node
	I1009 18:59:34.531292       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:59:34.531328       1 main.go:300] handling current node
	I1009 18:59:44.527288       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:59:44.527321       1 main.go:300] handling current node
	I1009 18:59:54.531292       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 18:59:54.531333       1 main.go:300] handling current node
	
	
	==> kube-apiserver [16933cbf0d80208494e9bb3cb9830fd168761741d4d4e7647b2b55ed3eb43c8c] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1009 18:48:34.966248       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.134.125:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.134.125:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.134.125:443: connect: connection refused" logger="UnhandledError"
	E1009 18:48:34.967635       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.134.125:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.134.125:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.134.125:443: connect: connection refused" logger="UnhandledError"
	I1009 18:48:35.001203       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1009 18:57:19.958662       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.55.9"}
	I1009 18:57:37.053068       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1009 18:57:37.230770       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.217.65"}
	I1009 18:57:39.760775       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1009 18:57:40.831110       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1009 18:57:50.618911       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1009 18:58:10.205575       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:58:10.205734       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 18:58:10.219122       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:58:10.219341       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 18:58:10.219439       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:58:10.232676       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:58:10.232824       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 18:58:10.242429       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:58:10.242751       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1009 18:58:11.223635       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1009 18:58:11.243020       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1009 18:58:11.251642       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E1009 18:58:29.000407       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1009 18:59:58.847826       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.224.125"}
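
Note: the connection-refused errors against 10.109.134.125:443 show the apiserver failing to reach the metrics-server backend for the aggregated v1beta1.metrics.k8s.io API at 18:48:34. A sketch for checking the aggregated API's registration and backing endpoints with plain kubectl (illustrative, not part of the recorded run):

  kubectl --context addons-814968 get apiservice v1beta1.metrics.k8s.io
  kubectl --context addons-814968 -n kube-system get endpoints metrics-server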
	
	
	==> kube-controller-manager [6851332d0dffcb1290144bb0978b1aae3c11a2c819bc2626e8621211c70ff867] <==
	W1009 18:58:32.187355       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 18:58:32.187400       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1009 18:58:34.342042       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-814968"
	I1009 18:58:35.331538       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1009 18:58:35.331570       1 shared_informer.go:320] Caches are synced for resource quota
	I1009 18:58:35.744198       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1009 18:58:35.744250       1 shared_informer.go:320] Caches are synced for garbage collector
	W1009 18:58:49.410362       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 18:58:49.410403       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 18:58:50.046006       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 18:58:50.046056       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 18:58:51.426146       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 18:58:51.426187       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1009 18:59:00.993206       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	W1009 18:59:10.687526       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 18:59:10.687568       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 18:59:25.014246       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 18:59:25.014291       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 18:59:28.991488       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 18:59:28.991533       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 18:59:35.926298       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 18:59:35.926342       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1009 18:59:58.649137       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="11.639998ms"
	I1009 18:59:58.658521       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.335284ms"
	I1009 18:59:58.658606       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="49.1µs"
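
Note: the recurring PartialObjectMetadata failures follow the removal of the snapshot.storage.k8s.io groups logged by the apiserver at 18:58:10; the controller-manager's metadata informers keep retrying list/watch calls for resource types whose CRDs no longer exist. A sketch for confirming which, if any, snapshot CRDs remain:

  kubectl --context addons-814968 get crd | grep snapshot.storage.k8s.io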
	
	
	==> kube-proxy [2ecd337cc588bb369774c4b52c5bf4512b7380cb6e1c083dfd205b222052bfb1] <==
	I1009 18:47:09.633707       1 server_linux.go:66] "Using iptables proxy"
	I1009 18:47:10.538221       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1009 18:47:10.538329       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 18:47:10.729409       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 18:47:10.729496       1 server_linux.go:169] "Using iptables Proxier"
	I1009 18:47:10.733444       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 18:47:10.734161       1 server.go:483] "Version info" version="v1.31.1"
	I1009 18:47:10.734191       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 18:47:10.736403       1 config.go:199] "Starting service config controller"
	I1009 18:47:10.736502       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1009 18:47:10.736578       1 config.go:105] "Starting endpoint slice config controller"
	I1009 18:47:10.740861       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1009 18:47:10.736628       1 config.go:328] "Starting node config controller"
	I1009 18:47:10.740890       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1009 18:47:10.930430       1 shared_informer.go:320] Caches are synced for node config
	I1009 18:47:10.930543       1 shared_informer.go:320] Caches are synced for service config
	I1009 18:47:10.930482       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [221dded81f0de01ca060a812c2b039396973be8cb1b615537a1dd3baf3739915] <==
	E1009 18:46:58.244832       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1009 18:46:58.244829       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:46:59.089011       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1009 18:46:59.089059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:46:59.098410       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1009 18:46:59.098445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:46:59.138894       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1009 18:46:59.138936       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:46:59.174617       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1009 18:46:59.174662       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:46:59.180011       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1009 18:46:59.180023       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1009 18:46:59.180051       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1009 18:46:59.180051       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 18:46:59.245248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1009 18:46:59.245287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1009 18:46:59.290527       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1009 18:46:59.290579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:46:59.372229       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1009 18:46:59.372274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 18:46:59.389743       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 18:46:59.389793       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 18:46:59.399276       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1009 18:46:59.399320       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1009 18:46:59.641759       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
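
Note: all of the forbidden list/watch errors above are stamped 18:46:58-18:46:59, during apiserver startup before the default RBAC bindings were in place; the closing "Caches are synced" line shows the scheduler recovered on its own. A sketch for spot-checking the scheduler's permissions after bootstrap, assuming the caller is allowed to impersonate:

  kubectl --context addons-814968 auth can-i list nodes --as=system:kube-scheduler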
	
	
	==> kubelet <==
	Oct 09 18:59:10 addons-814968 kubelet[1623]: E1009 18:59:10.588161    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500350587882921,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586484,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 18:59:10 addons-814968 kubelet[1623]: E1009 18:59:10.588195    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500350587882921,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586484,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 18:59:13 addons-814968 kubelet[1623]: I1009 18:59:13.336577    1623 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 18:59:13 addons-814968 kubelet[1623]: E1009 18:59:13.337732    1623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="2d22f598-c2e1-4a30-bd26-0f9952ed8024"
	Oct 09 18:59:20 addons-814968 kubelet[1623]: E1009 18:59:20.590958    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500360590720728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586484,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 18:59:20 addons-814968 kubelet[1623]: E1009 18:59:20.590997    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500360590720728,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586484,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 18:59:28 addons-814968 kubelet[1623]: I1009 18:59:28.336458    1623 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 18:59:28 addons-814968 kubelet[1623]: E1009 18:59:28.337558    1623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="2d22f598-c2e1-4a30-bd26-0f9952ed8024"
	Oct 09 18:59:30 addons-814968 kubelet[1623]: E1009 18:59:30.593436    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500370593165380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586484,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 18:59:30 addons-814968 kubelet[1623]: E1009 18:59:30.593469    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500370593165380,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586484,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 18:59:40 addons-814968 kubelet[1623]: E1009 18:59:40.595328    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500380595070125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586484,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 18:59:40 addons-814968 kubelet[1623]: E1009 18:59:40.595367    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500380595070125,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586484,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 18:59:42 addons-814968 kubelet[1623]: I1009 18:59:42.336158    1623 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 18:59:42 addons-814968 kubelet[1623]: E1009 18:59:42.337189    1623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="2d22f598-c2e1-4a30-bd26-0f9952ed8024"
	Oct 09 18:59:50 addons-814968 kubelet[1623]: E1009 18:59:50.598029    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500390597809167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586484,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 18:59:50 addons-814968 kubelet[1623]: E1009 18:59:50.598063    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500390597809167,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:586484,},InodesUsed:&UInt64Value{Value:225,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 18:59:55 addons-814968 kubelet[1623]: I1009 18:59:55.335751    1623 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 18:59:55 addons-814968 kubelet[1623]: E1009 18:59:55.336766    1623 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="2d22f598-c2e1-4a30-bd26-0f9952ed8024"
	Oct 09 18:59:58 addons-814968 kubelet[1623]: E1009 18:59:58.648820    1623 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cab27d04-d005-4287-934f-decae3e018d4" containerName="local-path-provisioner"
	Oct 09 18:59:58 addons-814968 kubelet[1623]: E1009 18:59:58.648868    1623 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e647b19e-2eea-4da4-8d02-50bd3ea1eea4" containerName="cloud-spanner-emulator"
	Oct 09 18:59:58 addons-814968 kubelet[1623]: E1009 18:59:58.648878    1623 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c59d160e-284d-4d33-aa5d-f5cddd7438e0" containerName="helper-pod"
	Oct 09 18:59:58 addons-814968 kubelet[1623]: I1009 18:59:58.648928    1623 memory_manager.go:354] "RemoveStaleState removing state" podUID="c59d160e-284d-4d33-aa5d-f5cddd7438e0" containerName="helper-pod"
	Oct 09 18:59:58 addons-814968 kubelet[1623]: I1009 18:59:58.648940    1623 memory_manager.go:354] "RemoveStaleState removing state" podUID="e647b19e-2eea-4da4-8d02-50bd3ea1eea4" containerName="cloud-spanner-emulator"
	Oct 09 18:59:58 addons-814968 kubelet[1623]: I1009 18:59:58.648947    1623 memory_manager.go:354] "RemoveStaleState removing state" podUID="cab27d04-d005-4287-934f-decae3e018d4" containerName="local-path-provisioner"
	Oct 09 18:59:58 addons-814968 kubelet[1623]: I1009 18:59:58.747885    1623 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn4s2\" (UniqueName: \"kubernetes.io/projected/c5382932-5f77-4c8c-af72-5140f41eea6d-kube-api-access-wn4s2\") pod \"hello-world-app-55bf9c44b4-rxzq2\" (UID: \"c5382932-5f77-4c8c-af72-5140f41eea6d\") " pod="default/hello-world-app-55bf9c44b4-rxzq2"
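
Note: the kubelet lines capture the failure chain for default/busybox directly: the gcp-auth pull secret cannot be retrieved, the image pull proceeds without credentials, and the pod stays in ImagePullBackOff. A sketch of follow-up checks one might run with the same context (illustrative, not part of the recorded run):

  kubectl --context addons-814968 get secret gcp-auth -n default
  kubectl --context addons-814968 get events -n default --field-selector involvedObject.name=busybox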
	
	
	==> storage-provisioner [8caeb8fad85ec95c5166c64c88db374ab53bae2b4b1c9d62f3e98a0c1445a981] <==
	I1009 18:47:25.571754       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 18:47:25.580483       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 18:47:25.580515       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1009 18:47:25.631459       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 18:47:25.631589       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5ac3e4d7-32fd-45bb-9f1c-61752b666082", APIVersion:"v1", ResourceVersion:"875", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-814968_ab2696c8-e483-426a-9a4f-d5167d195767 became leader
	I1009 18:47:25.631700       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-814968_ab2696c8-e483-426a-9a4f-d5167d195767!
	I1009 18:47:25.732640       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-814968_ab2696c8-e483-426a-9a4f-d5167d195767!
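
Note: the provisioner acquired leadership through the kube-system/k8s.io-minikube-hostpath Endpoints object, as the LeaderElection event above records. A sketch for viewing the recorded leader, assuming the leader-election library's usual annotation key:

  kubectl --context addons-814968 -n kube-system get endpoints k8s.io-minikube-hostpath \
    -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'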
	

-- /stdout --
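Editor's note: the kubelet entries at 18:59:55 in the log dump above spell out the busybox failure chain: the pod still references an image pull secret named gcp-auth that no longer exists (the gcp-auth addon was disabled at 18:57 per the Audit table further down), and earlier pulls from gcr.io had already been failing with "unable to retrieve auth token: invalid username/password". A minimal way to confirm this state on a reproduction cluster (illustrative commands, not part of the recorded run):

	kubectl --context addons-814968 get secret gcp-auth -n default
	kubectl --context addons-814968 get pod busybox -n default -o jsonpath='{.spec.imagePullSecrets}'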
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-814968 -n addons-814968
helpers_test.go:261: (dbg) Run:  kubectl --context addons-814968 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox hello-world-app-55bf9c44b4-rxzq2 ingress-nginx-admission-create-snl7p ingress-nginx-admission-patch-lk7hx
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-814968 describe pod busybox hello-world-app-55bf9c44b4-rxzq2 ingress-nginx-admission-create-snl7p ingress-nginx-admission-patch-lk7hx
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-814968 describe pod busybox hello-world-app-55bf9c44b4-rxzq2 ingress-nginx-admission-create-snl7p ingress-nginx-admission-patch-lk7hx: exit status 1 (81.367661ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-814968/192.168.49.2
	Start Time:       Wed, 09 Oct 2024 18:49:08 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kxsk9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kxsk9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason                           Age                   From               Message
	  ----     ------                           ----                  ----               -------
	  Normal   Scheduled                        10m                   default-scheduler  Successfully assigned default/busybox to addons-814968
	  Normal   Pulling                          9m17s (x4 over 10m)   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed                           9m17s (x4 over 10m)   kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed                           9m17s (x4 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed                           9m4s (x6 over 10m)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff                          5m45s (x21 over 10m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  FailedToRetrieveImagePullSecret  48s (x10 over 2m49s)  kubelet            Unable to retrieve some image pull secrets (gcp-auth); attempting to pull the image may not succeed.
	
	
	Name:             hello-world-app-55bf9c44b4-rxzq2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-814968/192.168.49.2
	Start Time:       Wed, 09 Oct 2024 18:59:58 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wn4s2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wn4s2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-rxzq2 to addons-814968
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Normal  Pulled     1s    kubelet            Successfully pulled image "docker.io/kicbase/echo-server:1.0" in 1.608s (1.608s including waiting). Image size: 4944818 bytes.
	  Normal  Created    1s    kubelet            Created container hello-world-app
	  Normal  Started    1s    kubelet            Started container hello-world-app

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-snl7p" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-lk7hx" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-814968 describe pod busybox hello-world-app-55bf9c44b4-rxzq2 ingress-nginx-admission-create-snl7p ingress-nginx-admission-patch-lk7hx: exit status 1
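Editor's note: the exit status 1 here is expected kubectl behavior rather than an additional failure: kubectl describe exits non-zero as soon as any named resource is NotFound, while still printing full output for the pods it did find (busybox and hello-world-app above). The two ingress-nginx admission pods are short-lived Job pods and appear to have been cleaned up between the pod listing and the describe call; restricting the call to the pods that still exist (illustrative) would exit 0:

	kubectl --context addons-814968 describe pod busybox hello-world-app-55bf9c44b4-rxzq2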
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-814968 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-amd64 -p addons-814968 addons disable ingress-dns --alsologtostderr -v=1: (1.512004067s)
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-814968 addons disable ingress --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-amd64 -p addons-814968 addons disable ingress --alsologtostderr -v=1: (7.636396137s)
--- FAIL: TestAddons/parallel/Ingress (153.57s)
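Editor's note: per the Audit table later in this report, the probe "ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'" started at 18:57 but has no End Time recorded, consistent with the ingress endpoint never answering within the test window; the subsequent ingress-dns and ingress disables completed normally. The same probe can be replayed by hand (illustrative invocation, assuming a live reproduction cluster):

	out/minikube-linux-amd64 -p addons-814968 ssh -- curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'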

x
+
TestAddons/parallel/MetricsServer (292.51s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 2.639095ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-5gbfm" [aecf0efb-0d9b-429c-82bb-0aa04751f7f0] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003337102s
addons_test.go:402: (dbg) Run:  kubectl --context addons-814968 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-814968 top pods -n kube-system: exit status 1 (109.362084ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dcfpw, age: 10m18.361937944s

** /stderr **
I1009 18:57:24.364044   15983 retry.go:31] will retry after 2.243644724s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-814968 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-814968 top pods -n kube-system: exit status 1 (67.143536ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dcfpw, age: 10m20.67408266s

** /stderr **
I1009 18:57:26.676121   15983 retry.go:31] will retry after 4.482551462s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-814968 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-814968 top pods -n kube-system: exit status 1 (74.853407ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dcfpw, age: 10m25.23226578s

** /stderr **
I1009 18:57:31.234097   15983 retry.go:31] will retry after 5.74528665s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-814968 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-814968 top pods -n kube-system: exit status 1 (73.702702ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dcfpw, age: 10m31.052059418s

** /stderr **
I1009 18:57:37.054064   15983 retry.go:31] will retry after 9.625459744s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-814968 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-814968 top pods -n kube-system: exit status 1 (65.609197ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dcfpw, age: 10m40.743809587s

** /stderr **
I1009 18:57:46.745895   15983 retry.go:31] will retry after 10.606753112s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-814968 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-814968 top pods -n kube-system: exit status 1 (69.561054ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dcfpw, age: 10m51.420761532s

** /stderr **
I1009 18:57:57.422623   15983 retry.go:31] will retry after 33.353909654s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-814968 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-814968 top pods -n kube-system: exit status 1 (65.790137ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dcfpw, age: 11m24.842174182s

** /stderr **
I1009 18:58:30.844555   15983 retry.go:31] will retry after 26.93199925s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-814968 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-814968 top pods -n kube-system: exit status 1 (64.621377ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dcfpw, age: 11m51.838730214s

** /stderr **
I1009 18:58:57.841528   15983 retry.go:31] will retry after 39.423658821s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-814968 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-814968 top pods -n kube-system: exit status 1 (62.78649ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dcfpw, age: 12m31.332173205s

** /stderr **
I1009 18:59:37.334214   15983 retry.go:31] will retry after 34.953148591s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-814968 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-814968 top pods -n kube-system: exit status 1 (62.342986ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dcfpw, age: 13m6.348773671s

** /stderr **
I1009 19:00:12.350867   15983 retry.go:31] will retry after 58.255614387s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-814968 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-814968 top pods -n kube-system: exit status 1 (62.698468ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dcfpw, age: 14m4.668004177s

** /stderr **
I1009 19:01:10.670188   15983 retry.go:31] will retry after 58.540981679s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-814968 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-814968 top pods -n kube-system: exit status 1 (63.894199ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-dcfpw, age: 15m3.273480118s

** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
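Editor's note: metrics-server was Running and passed its readiness wait in about 5s, yet every kubectl top attempt across roughly five minutes of retries returned "Metrics not available", so the break is somewhere in the metrics pipeline (kubelet scrape, metrics-server, or the aggregated API) rather than in pod scheduling. Two illustrative checks on a reproduction cluster, assuming the standard metrics-server APIService name:

	kubectl --context addons-814968 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-814968 -n kube-system logs -l k8s-app=metrics-server --tail=50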
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-814968
helpers_test.go:235: (dbg) docker inspect addons-814968:

-- stdout --
	[
	    {
	        "Id": "1cffd86fbfa3a19811ce9735ea408f354dd5b50821a9dbeddc1a98d29ea08057",
	        "Created": "2024-10-09T18:46:47.904681606Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 18039,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-10-09T18:46:48.046279093Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3a8635a679ec007165247a79bf5f156508ffd34b58bfc31cc163a0cc0634bac6",
	        "ResolvConfPath": "/var/lib/docker/containers/1cffd86fbfa3a19811ce9735ea408f354dd5b50821a9dbeddc1a98d29ea08057/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1cffd86fbfa3a19811ce9735ea408f354dd5b50821a9dbeddc1a98d29ea08057/hostname",
	        "HostsPath": "/var/lib/docker/containers/1cffd86fbfa3a19811ce9735ea408f354dd5b50821a9dbeddc1a98d29ea08057/hosts",
	        "LogPath": "/var/lib/docker/containers/1cffd86fbfa3a19811ce9735ea408f354dd5b50821a9dbeddc1a98d29ea08057/1cffd86fbfa3a19811ce9735ea408f354dd5b50821a9dbeddc1a98d29ea08057-json.log",
	        "Name": "/addons-814968",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-814968:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-814968",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/437c8fface47263a5556077120b346b810bd07153f3033e0099cd9d246f528f9-init/diff:/var/lib/docker/overlay2/c60c6c9d5a0badaa1d73d2edf39e8bd73e404c1e1194546fbfceed54f9130ada/diff",
	                "MergedDir": "/var/lib/docker/overlay2/437c8fface47263a5556077120b346b810bd07153f3033e0099cd9d246f528f9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/437c8fface47263a5556077120b346b810bd07153f3033e0099cd9d246f528f9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/437c8fface47263a5556077120b346b810bd07153f3033e0099cd9d246f528f9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-814968",
	                "Source": "/var/lib/docker/volumes/addons-814968/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-814968",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-814968",
	                "name.minikube.sigs.k8s.io": "addons-814968",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b7ff64ae22a4804532e10bc0b1f204bad0baf0d0d2da3318217819eef34e7326",
	            "SandboxKey": "/var/run/docker/netns/b7ff64ae22a4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-814968": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a9d38e26e32c6bce52cce30e4e79870e59f7e727468425e0a248b942225086a9",
	                    "EndpointID": "86fdf39292e2f1b44a7ea27da7b8e11a77e1e4da020b31b28866ba4d4feae27c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-814968",
	                        "1cffd86fbfa3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
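Editor's note: the docker inspect output above shows the minikube node container itself is healthy: State.Running is true, all expected ports (22, 2376, 5000, 8443, 32443) are published on 127.0.0.1, and the container holds 192.168.49.2 on the addons-814968 network, so the metrics failure is not at the container/infrastructure layer. A compact way to re-check just the state fields (illustrative):

	docker inspect -f '{{.State.Status}} {{.State.StartedAt}}' addons-814968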
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-814968 -n addons-814968
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-814968 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-814968 logs -n 25: (1.105177463s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-242838 | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | download-docker-242838                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-242838                                                                   | download-docker-242838 | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:46 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-233255   | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | binary-mirror-233255                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45383                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-233255                                                                     | binary-mirror-233255   | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:46 UTC |
	| addons  | disable dashboard -p                                                                        | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | addons-814968                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | addons-814968                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-814968 --wait=true                                                                | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:49 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-814968 addons disable                                                                | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:49 UTC | 09 Oct 24 18:49 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-814968 addons disable                                                                | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:57 UTC | 09 Oct 24 18:57 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:57 UTC | 09 Oct 24 18:57 UTC |
	|         | -p addons-814968                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-814968 addons disable                                                                | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:57 UTC | 09 Oct 24 18:57 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-814968 ip                                                                            | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:57 UTC | 09 Oct 24 18:57 UTC |
	| addons  | addons-814968 addons disable                                                                | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:57 UTC | 09 Oct 24 18:57 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-814968 addons                                                                        | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:57 UTC | 09 Oct 24 18:57 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-814968 ssh curl -s                                                                   | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:57 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-814968 addons                                                                        | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:57 UTC | 09 Oct 24 18:57 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-814968 addons disable                                                                | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:57 UTC | 09 Oct 24 18:58 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-814968 addons                                                                        | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-814968 addons                                                                        | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-814968 ssh cat                                                                       | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | /opt/local-path-provisioner/pvc-0ee2d6e6-4e3a-44c5-8adf-db1e9e8041de_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-814968 addons disable                                                                | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-814968 addons                                                                        | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:58 UTC | 09 Oct 24 18:58 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-814968 ip                                                                            | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 18:59 UTC | 09 Oct 24 18:59 UTC |
	| addons  | addons-814968 addons disable                                                                | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 19:00 UTC | 09 Oct 24 19:00 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-814968 addons disable                                                                | addons-814968          | jenkins | v1.34.0 | 09 Oct 24 19:00 UTC | 09 Oct 24 19:00 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 18:46:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:46:23.630639   17296 out.go:345] Setting OutFile to fd 1 ...
	I1009 18:46:23.630737   17296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:46:23.630742   17296 out.go:358] Setting ErrFile to fd 2...
	I1009 18:46:23.630747   17296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:46:23.630900   17296 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9209/.minikube/bin
	I1009 18:46:23.631510   17296 out.go:352] Setting JSON to false
	I1009 18:46:23.632319   17296 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1735,"bootTime":1728497849,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:46:23.632413   17296 start.go:139] virtualization: kvm guest
	I1009 18:46:23.634356   17296 out.go:177] * [addons-814968] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 18:46:23.635505   17296 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 18:46:23.635545   17296 notify.go:220] Checking for updates...
	I1009 18:46:23.637620   17296 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:46:23.638770   17296 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9209/kubeconfig
	I1009 18:46:23.639791   17296 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9209/.minikube
	I1009 18:46:23.640874   17296 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 18:46:23.642015   17296 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 18:46:23.643369   17296 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 18:46:23.666278   17296 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1009 18:46:23.666356   17296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:46:23.709451   17296 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-09 18:46:23.700447089 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:46:23.709552   17296 docker.go:318] overlay module found
	I1009 18:46:23.711274   17296 out.go:177] * Using the docker driver based on user configuration
	I1009 18:46:23.712309   17296 start.go:297] selected driver: docker
	I1009 18:46:23.712327   17296 start.go:901] validating driver "docker" against <nil>
	I1009 18:46:23.712338   17296 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 18:46:23.713152   17296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:46:23.761848   17296 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-10-09 18:46:23.753565989 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:46:23.761997   17296 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 18:46:23.762234   17296 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:46:23.763755   17296 out.go:177] * Using Docker driver with root privileges
	I1009 18:46:23.764719   17296 cni.go:84] Creating CNI manager for ""
	I1009 18:46:23.764776   17296 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:46:23.764785   17296 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:46:23.764850   17296 start.go:340] cluster config:
	{Name:addons-814968 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-814968 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:46:23.765955   17296 out.go:177] * Starting "addons-814968" primary control-plane node in "addons-814968" cluster
	I1009 18:46:23.766857   17296 cache.go:121] Beginning downloading kic base image for docker with crio
	I1009 18:46:23.768003   17296 out.go:177] * Pulling base image v0.0.45-1728382586-19774 ...
	I1009 18:46:23.769160   17296 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 18:46:23.769185   17296 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1009 18:46:23.769210   17296 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9209/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I1009 18:46:23.769235   17296 cache.go:56] Caching tarball of preloaded images
	I1009 18:46:23.769351   17296 preload.go:172] Found /home/jenkins/minikube-integration/19780-9209/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1009 18:46:23.769367   17296 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I1009 18:46:23.769788   17296 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/config.json ...
	I1009 18:46:23.769830   17296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/config.json: {Name:mkfbea350396646be2581c2f722a4c2a0580f2d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:46:23.784895   17296 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1009 18:46:23.785021   17296 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
	I1009 18:46:23.785041   17296 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory, skipping pull
	I1009 18:46:23.785049   17296 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec exists in cache, skipping pull
	I1009 18:46:23.785056   17296 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec as a tarball
	I1009 18:46:23.785063   17296 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec from local cache
	I1009 18:46:35.553259   17296 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec from cached tarball
	I1009 18:46:35.553297   17296 cache.go:194] Successfully downloaded all kic artifacts
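The base image never touches the network here: the digest-pinned kicbase tarball is already in the local cache, so it is loaded straight into the daemon (the ~12s gap above is that load). Not part of the test flow, but one way to spot-check that the image landed with its digest intact:

	# List the kicbase image together with the sha256 digest minikube pinned
	docker image ls --digests gcr.io/k8s-minikube/kicbase-builds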
	I1009 18:46:35.553331   17296 start.go:360] acquireMachinesLock for addons-814968: {Name:mk93a1915d4c29d52bf51bdf1943947d947876d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1009 18:46:35.553427   17296 start.go:364] duration metric: took 77.389µs to acquireMachinesLock for "addons-814968"
	I1009 18:46:35.553454   17296 start.go:93] Provisioning new machine with config: &{Name:addons-814968 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-814968 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:46:35.553540   17296 start.go:125] createHost starting for "" (driver="docker")
	I1009 18:46:35.555171   17296 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1009 18:46:35.555380   17296 start.go:159] libmachine.API.Create for "addons-814968" (driver="docker")
	I1009 18:46:35.555416   17296 client.go:168] LocalClient.Create starting
	I1009 18:46:35.555489   17296 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19780-9209/.minikube/certs/ca.pem
	I1009 18:46:35.811322   17296 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19780-9209/.minikube/certs/cert.pem
	I1009 18:46:36.053584   17296 cli_runner.go:164] Run: docker network inspect addons-814968 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1009 18:46:36.069217   17296 cli_runner.go:211] docker network inspect addons-814968 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1009 18:46:36.069293   17296 network_create.go:284] running [docker network inspect addons-814968] to gather additional debugging logs...
	I1009 18:46:36.069313   17296 cli_runner.go:164] Run: docker network inspect addons-814968
	W1009 18:46:36.084931   17296 cli_runner.go:211] docker network inspect addons-814968 returned with exit code 1
	I1009 18:46:36.084959   17296 network_create.go:287] error running [docker network inspect addons-814968]: docker network inspect addons-814968: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-814968 not found
	I1009 18:46:36.084971   17296 network_create.go:289] output of [docker network inspect addons-814968]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-814968 not found
	
	** /stderr **
	I1009 18:46:36.085053   17296 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:46:36.100627   17296 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c7aaf0}
	I1009 18:46:36.100670   17296 network_create.go:124] attempt to create docker network addons-814968 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1009 18:46:36.100709   17296 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-814968 addons-814968
	I1009 18:46:36.160976   17296 network_create.go:108] docker network addons-814968 192.168.49.0/24 created
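With 192.168.49.0/24 reported free, the dedicated bridge network is created with an explicit gateway and an MTU of 1500. If the templated inspect above ever needs to be reproduced by hand, a minimal check of what was provisioned (profile name taken from this run) is:

	# Confirm the subnet and gateway of the freshly created network
	docker network inspect addons-814968 --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'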
	I1009 18:46:36.161007   17296 kic.go:121] calculated static IP "192.168.49.2" for the "addons-814968" container
	I1009 18:46:36.161059   17296 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1009 18:46:36.175893   17296 cli_runner.go:164] Run: docker volume create addons-814968 --label name.minikube.sigs.k8s.io=addons-814968 --label created_by.minikube.sigs.k8s.io=true
	I1009 18:46:36.193284   17296 oci.go:103] Successfully created a docker volume addons-814968
	I1009 18:46:36.193352   17296 cli_runner.go:164] Run: docker run --rm --name addons-814968-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-814968 --entrypoint /usr/bin/test -v addons-814968:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib
	I1009 18:46:43.450541   17296 cli_runner.go:217] Completed: docker run --rm --name addons-814968-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-814968 --entrypoint /usr/bin/test -v addons-814968:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -d /var/lib: (7.257148951s)
	I1009 18:46:43.450572   17296 oci.go:107] Successfully prepared a docker volume addons-814968
	I1009 18:46:43.450609   17296 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 18:46:43.450633   17296 kic.go:194] Starting extracting preloaded images to volume ...
	I1009 18:46:43.450689   17296 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19780-9209/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-814968:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir
	I1009 18:46:47.842712   17296 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19780-9209/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-814968:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec -I lz4 -xf /preloaded.tar -C /extractDir: (4.391983265s)
	I1009 18:46:47.842742   17296 kic.go:203] duration metric: took 4.392107004s to extract preloaded images to volume ...
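The 4.39s step above streams the lz4 preload straight into the named volume using the base image's own tar binary, so nothing has to be installed on the host. A sketch of how to spot-check the result, assuming the same volume and image (an illustration, not part of the test flow):

	# List what the extraction left under the volume's /var/lib
	docker run --rm -v addons-814968:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec ls /var/lib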
	W1009 18:46:47.842858   17296 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1009 18:46:47.842946   17296 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1009 18:46:47.888993   17296 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-814968 --name addons-814968 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-814968 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-814968 --network addons-814968 --ip 192.168.49.2 --volume addons-814968:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec
	I1009 18:46:48.197868   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Running}}
	I1009 18:46:48.214902   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:46:48.235537   17296 cli_runner.go:164] Run: docker exec addons-814968 stat /var/lib/dpkg/alternatives/iptables
	I1009 18:46:48.278675   17296 oci.go:144] the created container "addons-814968" has a running status.
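The node container publishes SSH, the API server, and the registry/Docker ports on ephemeral host ports bound to 127.0.0.1; the provisioner below dials port 32768 for SSH. The same mapping can be read back without the inspect template:

	# Show the ephemeral host port mapped to the node's sshd (22/tcp)
	docker port addons-814968 22/tcp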
	I1009 18:46:48.278701   17296 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa...
	I1009 18:46:48.435493   17296 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1009 18:46:48.456834   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:46:48.476231   17296 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1009 18:46:48.476269   17296 kic_runner.go:114] Args: [docker exec --privileged addons-814968 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1009 18:46:48.533390   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:46:48.559834   17296 machine.go:93] provisionDockerMachine start ...
	I1009 18:46:48.559926   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:46:48.580989   17296 main.go:141] libmachine: Using SSH client type: native
	I1009 18:46:48.581181   17296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1009 18:46:48.581192   17296 main.go:141] libmachine: About to run SSH command:
	hostname
	I1009 18:46:48.826574   17296 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-814968
	
	I1009 18:46:48.826606   17296 ubuntu.go:169] provisioning hostname "addons-814968"
	I1009 18:46:48.826681   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:46:48.845342   17296 main.go:141] libmachine: Using SSH client type: native
	I1009 18:46:48.845506   17296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1009 18:46:48.845521   17296 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-814968 && echo "addons-814968" | sudo tee /etc/hostname
	I1009 18:46:48.997938   17296 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-814968
	
	I1009 18:46:48.998016   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:46:49.016092   17296 main.go:141] libmachine: Using SSH client type: native
	I1009 18:46:49.016264   17296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1009 18:46:49.016280   17296 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-814968' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-814968/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-814968' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1009 18:46:49.151727   17296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1009 18:46:49.151754   17296 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19780-9209/.minikube CaCertPath:/home/jenkins/minikube-integration/19780-9209/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19780-9209/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19780-9209/.minikube}
	I1009 18:46:49.151775   17296 ubuntu.go:177] setting up certificates
	I1009 18:46:49.151788   17296 provision.go:84] configureAuth start
	I1009 18:46:49.151844   17296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-814968
	I1009 18:46:49.170541   17296 provision.go:143] copyHostCerts
	I1009 18:46:49.170625   17296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9209/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19780-9209/.minikube/ca.pem (1078 bytes)
	I1009 18:46:49.170734   17296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9209/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19780-9209/.minikube/cert.pem (1123 bytes)
	I1009 18:46:49.170791   17296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19780-9209/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19780-9209/.minikube/key.pem (1675 bytes)
	I1009 18:46:49.170839   17296 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19780-9209/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19780-9209/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19780-9209/.minikube/certs/ca-key.pem org=jenkins.addons-814968 san=[127.0.0.1 192.168.49.2 addons-814968 localhost minikube]
	I1009 18:46:49.293598   17296 provision.go:177] copyRemoteCerts
	I1009 18:46:49.293661   17296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1009 18:46:49.293697   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:46:49.311068   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:46:49.412073   17296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9209/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1009 18:46:49.434720   17296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9209/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1009 18:46:49.457651   17296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9209/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1009 18:46:49.479853   17296 provision.go:87] duration metric: took 328.05344ms to configureAuth
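configureAuth issues a server certificate whose SANs cover the loopback address, the static node IP, the hostname, and the minikube aliases listed above. One way to verify the SAN list on the generated cert (path as in this run):

	# Print the Subject Alternative Names baked into the machine server cert
	openssl x509 -noout -text -in /home/jenkins/minikube-integration/19780-9209/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'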
	I1009 18:46:49.479879   17296 ubuntu.go:193] setting minikube options for container-runtime
	I1009 18:46:49.480030   17296 config.go:182] Loaded profile config "addons-814968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 18:46:49.480118   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:46:49.496977   17296 main.go:141] libmachine: Using SSH client type: native
	I1009 18:46:49.497143   17296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8651c0] 0x867ea0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1009 18:46:49.497159   17296 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1009 18:46:49.724509   17296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1009 18:46:49.724535   17296 machine.go:96] duration metric: took 1.164678329s to provisionDockerMachine
	I1009 18:46:49.724548   17296 client.go:171] duration metric: took 14.169123577s to LocalClient.Create
	I1009 18:46:49.724568   17296 start.go:167] duration metric: took 14.169186307s to libmachine.API.Create "addons-814968"
	I1009 18:46:49.724581   17296 start.go:293] postStartSetup for "addons-814968" (driver="docker")
	I1009 18:46:49.724596   17296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1009 18:46:49.724673   17296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1009 18:46:49.724720   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:46:49.741883   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:46:49.839970   17296 ssh_runner.go:195] Run: cat /etc/os-release
	I1009 18:46:49.843041   17296 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1009 18:46:49.843072   17296 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1009 18:46:49.843080   17296 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1009 18:46:49.843086   17296 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1009 18:46:49.843098   17296 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9209/.minikube/addons for local assets ...
	I1009 18:46:49.843151   17296 filesync.go:126] Scanning /home/jenkins/minikube-integration/19780-9209/.minikube/files for local assets ...
	I1009 18:46:49.843176   17296 start.go:296] duration metric: took 118.585501ms for postStartSetup
	I1009 18:46:49.843472   17296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-814968
	I1009 18:46:49.860805   17296 profile.go:143] Saving config to /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/config.json ...
	I1009 18:46:49.861060   17296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 18:46:49.861100   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:46:49.877297   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:46:49.967778   17296 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1009 18:46:49.971777   17296 start.go:128] duration metric: took 14.41822137s to createHost
	I1009 18:46:49.971801   17296 start.go:83] releasing machines lock for "addons-814968", held for 14.418362266s
	I1009 18:46:49.971869   17296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-814968
	I1009 18:46:49.988762   17296 ssh_runner.go:195] Run: cat /version.json
	I1009 18:46:49.988789   17296 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1009 18:46:49.988811   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:46:49.988841   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:46:50.006480   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:46:50.007223   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:46:50.177877   17296 ssh_runner.go:195] Run: systemctl --version
	I1009 18:46:50.181945   17296 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1009 18:46:50.320325   17296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1009 18:46:50.324388   17296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:46:50.342667   17296 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1009 18:46:50.342737   17296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1009 18:46:50.368956   17296 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1009 18:46:50.368979   17296 start.go:495] detecting cgroup driver to use...
	I1009 18:46:50.369009   17296 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1009 18:46:50.369044   17296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1009 18:46:50.382328   17296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1009 18:46:50.392195   17296 docker.go:217] disabling cri-docker service (if available) ...
	I1009 18:46:50.392241   17296 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1009 18:46:50.404432   17296 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1009 18:46:50.417348   17296 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1009 18:46:50.492127   17296 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1009 18:46:50.567434   17296 docker.go:233] disabling docker service ...
	I1009 18:46:50.567489   17296 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1009 18:46:50.584044   17296 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1009 18:46:50.594617   17296 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1009 18:46:50.672151   17296 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1009 18:46:50.759493   17296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
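Because the node image ships several runtimes, everything except CRI-O is stopped, disabled, and masked so a later daemon-reload cannot bring it back. Condensed into one loop (a sketch; the run above masks only the .service units):

	# Stop, disable, and mask the competing runtime units
	for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
	  sudo systemctl stop -f "$unit" || true
	  sudo systemctl disable "$unit" 2>/dev/null || true
	  sudo systemctl mask "$unit" 2>/dev/null || true
	done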
	I1009 18:46:50.770179   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1009 18:46:50.784254   17296 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1009 18:46:50.784312   17296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:46:50.793299   17296 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1009 18:46:50.793361   17296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:46:50.801925   17296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:46:50.810799   17296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:46:50.819300   17296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1009 18:46:50.827359   17296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:46:50.835535   17296 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:46:50.848964   17296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1009 18:46:50.857549   17296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1009 18:46:50.864612   17296 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1009 18:46:50.864657   17296 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1009 18:46:50.877022   17296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
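The CRI-O drop-in is patched with sed rather than templated: the pause image and cgroup manager get pinned, conmon moves to the pod cgroup, and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls so low ports can be bound without privileges. Grouped in one place, the core edits from this run are:

	# Same drop-in file the log edits above
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	# Pin the pause image and the cgroup driver the kubelet expects
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	# Run conmon in the pod cgroup
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sudo systemctl restart crio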
	I1009 18:46:50.884627   17296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:46:50.954295   17296 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1009 18:46:51.069922   17296 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1009 18:46:51.069992   17296 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1009 18:46:51.073466   17296 start.go:563] Will wait 60s for crictl version
	I1009 18:46:51.073515   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:46:51.076745   17296 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1009 18:46:51.108427   17296 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1009 18:46:51.108542   17296 ssh_runner.go:195] Run: crio --version
	I1009 18:46:51.142790   17296 ssh_runner.go:195] Run: crio --version
	I1009 18:46:51.178659   17296 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I1009 18:46:51.179851   17296 cli_runner.go:164] Run: docker network inspect addons-814968 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1009 18:46:51.196458   17296 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1009 18:46:51.199800   17296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1009 18:46:51.210543   17296 kubeadm.go:883] updating cluster {Name:addons-814968 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-814968 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1009 18:46:51.210687   17296 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I1009 18:46:51.210769   17296 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:46:51.272988   17296 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:46:51.273009   17296 crio.go:433] Images already preloaded, skipping extraction
	I1009 18:46:51.273048   17296 ssh_runner.go:195] Run: sudo crictl images --output json
	I1009 18:46:51.303644   17296 crio.go:514] all images are preloaded for cri-o runtime.
	I1009 18:46:51.303665   17296 cache_images.go:84] Images are preloaded, skipping loading
	I1009 18:46:51.303677   17296 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I1009 18:46:51.303765   17296 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-814968 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-814968 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1009 18:46:51.303821   17296 ssh_runner.go:195] Run: crio config
	I1009 18:46:51.343005   17296 cni.go:84] Creating CNI manager for ""
	I1009 18:46:51.343026   17296 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:46:51.343041   17296 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1009 18:46:51.343063   17296 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-814968 NodeName:addons-814968 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1009 18:46:51.343188   17296 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-814968"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1009 18:46:51.343269   17296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I1009 18:46:51.351467   17296 binaries.go:44] Found k8s binaries, skipping transfer
	I1009 18:46:51.351542   17296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1009 18:46:51.359309   17296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1009 18:46:51.374895   17296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1009 18:46:51.390849   17296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I1009 18:46:51.406596   17296 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1009 18:46:51.409776   17296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
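Both host aliases (host.minikube.internal earlier and control-plane.minikube.internal here) are installed with the same rewrite-then-copy pattern: filter any stale line out, append the fresh mapping to a temp file, and copy it back in one step so /etc/hosts is never left half-written. Generalized (function name hypothetical):

	# Idempotently pin NAME to ADDR in /etc/hosts without editing it in place
	update_host() {
	  local addr="$1" name="$2"
	  { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$addr" "$name"; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$
	}
	update_host 192.168.49.2 control-plane.minikube.internal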
	I1009 18:46:51.419588   17296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:46:51.491686   17296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:46:51.503923   17296 certs.go:68] Setting up /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968 for IP: 192.168.49.2
	I1009 18:46:51.503947   17296 certs.go:194] generating shared ca certs ...
	I1009 18:46:51.503968   17296 certs.go:226] acquiring lock for ca certs: {Name:mkb239be22b48fcec8220567bb09be367227c7bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:46:51.504090   17296 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19780-9209/.minikube/ca.key
	I1009 18:46:51.586588   17296 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9209/.minikube/ca.crt ...
	I1009 18:46:51.586615   17296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9209/.minikube/ca.crt: {Name:mk9017172016aab041c9d0974cc54ec89ffe8046 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:46:51.586796   17296 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9209/.minikube/ca.key ...
	I1009 18:46:51.586820   17296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9209/.minikube/ca.key: {Name:mkcc0e54630796737c7e4ca6bb840db75ecb2612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:46:51.586927   17296 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19780-9209/.minikube/proxy-client-ca.key
	I1009 18:46:51.760127   17296 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9209/.minikube/proxy-client-ca.crt ...
	I1009 18:46:51.760157   17296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9209/.minikube/proxy-client-ca.crt: {Name:mkdc51782eb792306c095a5b9e06ed936f4f9db0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:46:51.760330   17296 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9209/.minikube/proxy-client-ca.key ...
	I1009 18:46:51.760341   17296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9209/.minikube/proxy-client-ca.key: {Name:mk1c16156690aff81e7166e5eeab1762de0e570a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:46:51.760408   17296 certs.go:256] generating profile certs ...
	I1009 18:46:51.760461   17296 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.key
	I1009 18:46:51.760483   17296 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt with IP's: []
	I1009 18:46:51.892598   17296 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt ...
	I1009 18:46:51.892628   17296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt: {Name:mkb6c9da8d44cf533327e70f97d5cdfad57104a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:46:51.892795   17296 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.key ...
	I1009 18:46:51.892806   17296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.key: {Name:mk2cabfe4365d7f47d7f418a481b3f7a5010b79f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:46:51.892873   17296 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/apiserver.key.c00f15e1
	I1009 18:46:51.892890   17296 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/apiserver.crt.c00f15e1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1009 18:46:52.042557   17296 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/apiserver.crt.c00f15e1 ...
	I1009 18:46:52.042591   17296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/apiserver.crt.c00f15e1: {Name:mkee029f9a4ac259898be0f264b9384438234bde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:46:52.042759   17296 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/apiserver.key.c00f15e1 ...
	I1009 18:46:52.042773   17296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/apiserver.key.c00f15e1: {Name:mk6cdc8fdee962c0eb559ed3b23b985af4d63b00 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:46:52.042853   17296 certs.go:381] copying /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/apiserver.crt.c00f15e1 -> /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/apiserver.crt
	I1009 18:46:52.042922   17296 certs.go:385] copying /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/apiserver.key.c00f15e1 -> /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/apiserver.key
	I1009 18:46:52.042967   17296 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/proxy-client.key
	I1009 18:46:52.042983   17296 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/proxy-client.crt with IP's: []
	I1009 18:46:52.189346   17296 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/proxy-client.crt ...
	I1009 18:46:52.189383   17296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/proxy-client.crt: {Name:mk1982ae3e0f7bd30a28be5ea07e23a663ec466f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:46:52.189549   17296 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/proxy-client.key ...
	I1009 18:46:52.189565   17296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/proxy-client.key: {Name:mk9d212f973bc5ced33898bf3a0e82c2483498f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:46:52.189811   17296 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9209/.minikube/certs/ca-key.pem (1679 bytes)
	I1009 18:46:52.189859   17296 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9209/.minikube/certs/ca.pem (1078 bytes)
	I1009 18:46:52.189898   17296 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9209/.minikube/certs/cert.pem (1123 bytes)
	I1009 18:46:52.189932   17296 certs.go:484] found cert: /home/jenkins/minikube-integration/19780-9209/.minikube/certs/key.pem (1675 bytes)
	I1009 18:46:52.190530   17296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9209/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1009 18:46:52.214550   17296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9209/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1009 18:46:52.236308   17296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9209/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1009 18:46:52.258110   17296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9209/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1009 18:46:52.279209   17296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1009 18:46:52.300274   17296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1009 18:46:52.321280   17296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1009 18:46:52.342050   17296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1009 18:46:52.362928   17296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19780-9209/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1009 18:46:52.383777   17296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1009 18:46:52.399238   17296 ssh_runner.go:195] Run: openssl version
	I1009 18:46:52.404160   17296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1009 18:46:52.412993   17296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:46:52.416271   17296 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct  9 18:46 /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:46:52.416327   17296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1009 18:46:52.422761   17296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
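The b5213941.0 name is not arbitrary: OpenSSL resolves trust by the subject-name hash of the certificate, so the symlink in /etc/ssl/certs must be called <hash>.0. The hash and link from this run can be reproduced with:

	# Compute the subject hash OpenSSL uses for CA lookup, then create the link
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"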
	I1009 18:46:52.431548   17296 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1009 18:46:52.434469   17296 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1009 18:46:52.434514   17296 kubeadm.go:392] StartCluster: {Name:addons-814968 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-814968 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:46:52.434588   17296 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1009 18:46:52.434629   17296 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1009 18:46:52.467544   17296 cri.go:89] found id: ""
	I1009 18:46:52.467598   17296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1009 18:46:52.475527   17296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1009 18:46:52.483237   17296 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1009 18:46:52.483299   17296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1009 18:46:52.490906   17296 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1009 18:46:52.490926   17296 kubeadm.go:157] found existing configuration files:
	
	I1009 18:46:52.490968   17296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1009 18:46:52.498550   17296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1009 18:46:52.498605   17296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1009 18:46:52.506064   17296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1009 18:46:52.514186   17296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1009 18:46:52.514241   17296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1009 18:46:52.521731   17296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1009 18:46:52.529439   17296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1009 18:46:52.529495   17296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1009 18:46:52.537186   17296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1009 18:46:52.544985   17296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1009 18:46:52.545048   17296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
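The four grep/rm pairs above apply one rule: any kubeconfig under /etc/kubernetes that does not already point at https://control-plane.minikube.internal:8443 is removed so kubeadm init can regenerate it. Collapsed into a loop, the check is equivalent to:

	# Drop any kubeconfig that does not target the expected control-plane endpoint
	for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q https://control-plane.minikube.internal:8443 "/etc/kubernetes/$conf" || sudo rm -f "/etc/kubernetes/$conf"
	done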
	I1009 18:46:52.553329   17296 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1009 18:46:52.586008   17296 kubeadm.go:310] W1009 18:46:52.585245    1292 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 18:46:52.586320   17296 kubeadm.go:310] W1009 18:46:52.585837    1292 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I1009 18:46:52.603599   17296 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I1009 18:46:52.650752   17296 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1009 18:47:01.045615   17296 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I1009 18:47:01.045706   17296 kubeadm.go:310] [preflight] Running pre-flight checks
	I1009 18:47:01.045826   17296 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1009 18:47:01.045897   17296 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I1009 18:47:01.045941   17296 kubeadm.go:310] OS: Linux
	I1009 18:47:01.046019   17296 kubeadm.go:310] CGROUPS_CPU: enabled
	I1009 18:47:01.046094   17296 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1009 18:47:01.046163   17296 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1009 18:47:01.046242   17296 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1009 18:47:01.046314   17296 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1009 18:47:01.046384   17296 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1009 18:47:01.046442   17296 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1009 18:47:01.046506   17296 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1009 18:47:01.046598   17296 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1009 18:47:01.046698   17296 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1009 18:47:01.046841   17296 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1009 18:47:01.046951   17296 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1009 18:47:01.047047   17296 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1009 18:47:01.048804   17296 out.go:235]   - Generating certificates and keys ...
	I1009 18:47:01.048908   17296 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1009 18:47:01.048995   17296 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1009 18:47:01.049086   17296 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1009 18:47:01.049155   17296 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1009 18:47:01.049211   17296 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1009 18:47:01.049262   17296 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1009 18:47:01.049317   17296 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1009 18:47:01.049461   17296 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-814968 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:47:01.049545   17296 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1009 18:47:01.049681   17296 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-814968 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1009 18:47:01.049764   17296 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1009 18:47:01.049850   17296 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1009 18:47:01.049906   17296 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1009 18:47:01.049980   17296 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1009 18:47:01.050054   17296 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1009 18:47:01.050133   17296 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1009 18:47:01.050213   17296 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1009 18:47:01.050303   17296 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1009 18:47:01.050381   17296 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1009 18:47:01.050483   17296 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1009 18:47:01.050586   17296 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1009 18:47:01.052273   17296 out.go:235]   - Booting up control plane ...
	I1009 18:47:01.052358   17296 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1009 18:47:01.052426   17296 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1009 18:47:01.052498   17296 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1009 18:47:01.052614   17296 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1009 18:47:01.052698   17296 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1009 18:47:01.052739   17296 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1009 18:47:01.052845   17296 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1009 18:47:01.052950   17296 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1009 18:47:01.053019   17296 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 500.994243ms
	I1009 18:47:01.053116   17296 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1009 18:47:01.053167   17296 kubeadm.go:310] [api-check] The API server is healthy after 4.001905081s
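Note: kubeadm's two health gates here are plain HTTP probes — the kubelet's healthz on 127.0.0.1:10248 (healthy after ~501ms) and the API server's /healthz (healthy after ~4.0s). Assuming the default kubeadm RBAC that exposes /healthz to unauthenticated clients, a manual probe from the node would be:

    curl -sk https://192.168.49.2:8443/healthz   # expect: ok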
	I1009 18:47:01.053259   17296 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1009 18:47:01.053371   17296 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1009 18:47:01.053421   17296 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1009 18:47:01.053581   17296 kubeadm.go:310] [mark-control-plane] Marking the node addons-814968 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1009 18:47:01.053631   17296 kubeadm.go:310] [bootstrap-token] Using token: a7saxq.a7xvj50z3lneobes
	I1009 18:47:01.055101   17296 out.go:235]   - Configuring RBAC rules ...
	I1009 18:47:01.055226   17296 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1009 18:47:01.055309   17296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1009 18:47:01.055428   17296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1009 18:47:01.055554   17296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1009 18:47:01.055654   17296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1009 18:47:01.055725   17296 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1009 18:47:01.055818   17296 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1009 18:47:01.055856   17296 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1009 18:47:01.055895   17296 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1009 18:47:01.055904   17296 kubeadm.go:310] 
	I1009 18:47:01.055957   17296 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1009 18:47:01.055963   17296 kubeadm.go:310] 
	I1009 18:47:01.056028   17296 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1009 18:47:01.056036   17296 kubeadm.go:310] 
	I1009 18:47:01.056065   17296 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1009 18:47:01.056114   17296 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1009 18:47:01.056161   17296 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1009 18:47:01.056167   17296 kubeadm.go:310] 
	I1009 18:47:01.056215   17296 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1009 18:47:01.056222   17296 kubeadm.go:310] 
	I1009 18:47:01.056265   17296 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1009 18:47:01.056272   17296 kubeadm.go:310] 
	I1009 18:47:01.056329   17296 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1009 18:47:01.056419   17296 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1009 18:47:01.056513   17296 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1009 18:47:01.056526   17296 kubeadm.go:310] 
	I1009 18:47:01.056626   17296 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1009 18:47:01.056794   17296 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1009 18:47:01.056815   17296 kubeadm.go:310] 
	I1009 18:47:01.056896   17296 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token a7saxq.a7xvj50z3lneobes \
	I1009 18:47:01.056985   17296 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0f019d0380fedf73af6bbd9730211a8845b5739fb8c36385f8ca038fee98ec96 \
	I1009 18:47:01.057004   17296 kubeadm.go:310] 	--control-plane 
	I1009 18:47:01.057010   17296 kubeadm.go:310] 
	I1009 18:47:01.057087   17296 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1009 18:47:01.057094   17296 kubeadm.go:310] 
	I1009 18:47:01.057161   17296 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token a7saxq.a7xvj50z3lneobes \
	I1009 18:47:01.057256   17296 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0f019d0380fedf73af6bbd9730211a8845b5739fb8c36385f8ca038fee98ec96 
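Note: both join commands reuse the bootstrap token a7saxq.a7xvj50z3lneobes, which kubeadm expires after 24h by default. On a longer-lived cluster you would mint a fresh one on the control plane with the standard kubeadm commands:

    sudo kubeadm token list
    sudo kubeadm token create --print-join-command   # emits a ready-to-run worker join line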
	I1009 18:47:01.057288   17296 cni.go:84] Creating CNI manager for ""
	I1009 18:47:01.057294   17296 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:47:01.058783   17296 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1009 18:47:01.060104   17296 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1009 18:47:01.063803   17296 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I1009 18:47:01.063819   17296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1009 18:47:01.080841   17296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
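Note: because the docker driver is paired with the crio runtime (cni.go lines above), minikube applies its bundled kindnet manifest (2601 bytes, copied to /var/tmp/minikube/cni.yaml) with the pinned kubectl. A hedged spot-check, assuming kindnet's usual DaemonSet name in kube-system:

    kubectl -n kube-system get daemonset kindnet -o wide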
	I1009 18:47:01.270028   17296 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1009 18:47:01.270088   17296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:01.270114   17296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-814968 minikube.k8s.io/updated_at=2024_10_09T18_47_01_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4 minikube.k8s.io/name=addons-814968 minikube.k8s.io/primary=true
	I1009 18:47:01.351931   17296 ops.go:34] apiserver oom_adj: -16
	I1009 18:47:01.352051   17296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:01.853135   17296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:02.352356   17296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:02.852505   17296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:03.352263   17296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:03.853151   17296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:04.352450   17296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:04.852424   17296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:05.352318   17296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:05.852544   17296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1009 18:47:05.913453   17296 kubeadm.go:1113] duration metric: took 4.643431535s to wait for elevateKubeSystemPrivileges
	I1009 18:47:05.913490   17296 kubeadm.go:394] duration metric: took 13.478978532s to StartCluster
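Note: the repeated `kubectl get sa default` runs above are minikube polling, at roughly 500ms intervals, until the default ServiceAccount exists — a cheap signal that the service-account controller is running — before declaring the "elevateKubeSystemPrivileges" step done. A standalone hedged equivalent:

    until kubectl get serviceaccount default >/dev/null 2>&1; do sleep 0.5; done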
	I1009 18:47:05.913519   17296 settings.go:142] acquiring lock: {Name:mk1ea3be815dc8fdbed3ad1d456d5a6e32d5dcd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:05.913619   17296 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19780-9209/kubeconfig
	I1009 18:47:05.913952   17296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19780-9209/kubeconfig: {Name:mk025fb048f06803d5f7ce2799ddfa736e063e97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1009 18:47:05.914122   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1009 18:47:05.914130   17296 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1009 18:47:05.914214   17296 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1009 18:47:05.914333   17296 addons.go:69] Setting yakd=true in profile "addons-814968"
	I1009 18:47:05.914343   17296 config.go:182] Loaded profile config "addons-814968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 18:47:05.914351   17296 addons.go:234] Setting addon yakd=true in "addons-814968"
	I1009 18:47:05.914347   17296 addons.go:69] Setting default-storageclass=true in profile "addons-814968"
	I1009 18:47:05.914376   17296 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-814968"
	I1009 18:47:05.914384   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.914385   17296 addons.go:69] Setting cloud-spanner=true in profile "addons-814968"
	I1009 18:47:05.914396   17296 addons.go:234] Setting addon cloud-spanner=true in "addons-814968"
	I1009 18:47:05.914401   17296 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-814968"
	I1009 18:47:05.914421   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.914410   17296 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-814968"
	I1009 18:47:05.914440   17296 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-814968"
	I1009 18:47:05.914445   17296 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-814968"
	I1009 18:47:05.914475   17296 addons.go:69] Setting ingress-dns=true in profile "addons-814968"
	I1009 18:47:05.914489   17296 addons.go:69] Setting inspektor-gadget=true in profile "addons-814968"
	I1009 18:47:05.914498   17296 addons.go:234] Setting addon ingress-dns=true in "addons-814968"
	I1009 18:47:05.914501   17296 addons.go:234] Setting addon inspektor-gadget=true in "addons-814968"
	I1009 18:47:05.914502   17296 addons.go:69] Setting volcano=true in profile "addons-814968"
	I1009 18:47:05.914516   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.914524   17296 addons.go:234] Setting addon volcano=true in "addons-814968"
	I1009 18:47:05.914526   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.914551   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.914747   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.914758   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.914911   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.914911   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.914913   17296 addons.go:69] Setting volumesnapshots=true in profile "addons-814968"
	I1009 18:47:05.914929   17296 addons.go:234] Setting addon volumesnapshots=true in "addons-814968"
	I1009 18:47:05.914955   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.914959   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.914975   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.915019   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.915351   17296 addons.go:69] Setting registry=true in profile "addons-814968"
	I1009 18:47:05.915376   17296 addons.go:234] Setting addon registry=true in "addons-814968"
	I1009 18:47:05.915377   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.915404   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.915597   17296 addons.go:69] Setting gcp-auth=true in profile "addons-814968"
	I1009 18:47:05.915646   17296 mustload.go:65] Loading cluster: addons-814968
	I1009 18:47:05.915671   17296 addons.go:69] Setting ingress=true in profile "addons-814968"
	I1009 18:47:05.915696   17296 addons.go:234] Setting addon ingress=true in "addons-814968"
	I1009 18:47:05.915750   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.915883   17296 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-814968"
	I1009 18:47:05.915915   17296 config.go:182] Loaded profile config "addons-814968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 18:47:05.915934   17296 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-814968"
	I1009 18:47:05.915960   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.916186   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.916228   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.916308   17296 addons.go:69] Setting storage-provisioner=true in profile "addons-814968"
	I1009 18:47:05.916334   17296 addons.go:234] Setting addon storage-provisioner=true in "addons-814968"
	I1009 18:47:05.916360   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.916390   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.916653   17296 out.go:177] * Verifying Kubernetes components...
	I1009 18:47:05.916714   17296 addons.go:69] Setting metrics-server=true in profile "addons-814968"
	I1009 18:47:05.916730   17296 addons.go:234] Setting addon metrics-server=true in "addons-814968"
	I1009 18:47:05.916761   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.914479   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.918399   17296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1009 18:47:05.947973   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.947988   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.948546   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.949133   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.978180   17296 addons.go:234] Setting addon default-storageclass=true in "addons-814968"
	I1009 18:47:05.978236   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.978757   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.982236   17296 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I1009 18:47:05.982876   17296 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-814968"
	I1009 18:47:05.982923   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:05.983826   17296 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1009 18:47:05.984321   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:05.986094   17296 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1009 18:47:05.986115   17296 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1009 18:47:05.986188   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:05.986581   17296 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1009 18:47:05.986597   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1009 18:47:05.986637   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:05.994912   17296 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	W1009 18:47:05.995373   17296 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1009 18:47:05.996589   17296 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1009 18:47:05.996613   17296 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1009 18:47:05.996677   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:05.998086   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:06.001545   17296 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I1009 18:47:06.003173   17296 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 18:47:06.003225   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1009 18:47:06.003284   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:06.016009   17296 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I1009 18:47:06.016038   17296 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I1009 18:47:06.016168   17296 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1009 18:47:06.018668   17296 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1009 18:47:06.018714   17296 out.go:177]   - Using image docker.io/registry:2.8.3
	I1009 18:47:06.018689   17296 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1009 18:47:06.018906   17296 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1009 18:47:06.018994   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:06.020325   17296 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1009 18:47:06.020350   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1009 18:47:06.020421   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:06.021695   17296 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1009 18:47:06.023344   17296 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 18:47:06.023366   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1009 18:47:06.023422   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:06.031303   17296 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1009 18:47:06.033309   17296 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1009 18:47:06.033310   17296 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1009 18:47:06.034244   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:06.034798   17296 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:47:06.034816   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1009 18:47:06.034870   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:06.037963   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:06.040217   17296 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1009 18:47:06.041863   17296 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1009 18:47:06.043823   17296 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1009 18:47:06.043927   17296 out.go:177]   - Using image docker.io/busybox:stable
	I1009 18:47:06.045276   17296 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1009 18:47:06.045496   17296 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 18:47:06.045517   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1009 18:47:06.045584   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:06.045839   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:06.047095   17296 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1009 18:47:06.047118   17296 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1009 18:47:06.047164   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:06.047669   17296 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1009 18:47:06.049262   17296 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1009 18:47:06.051269   17296 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1009 18:47:06.051468   17296 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1009 18:47:06.053211   17296 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1009 18:47:06.053233   17296 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1009 18:47:06.053307   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:06.065549   17296 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1009 18:47:06.067758   17296 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1009 18:47:06.067326   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:06.068248   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:06.070588   17296 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1009 18:47:06.076532   17296 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 18:47:06.076568   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1009 18:47:06.076637   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:06.087177   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:06.092780   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:06.094972   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:06.095271   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:06.098312   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:06.098493   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:06.103259   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:06.104846   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:06.106504   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	W1009 18:47:06.130372   17296 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1009 18:47:06.130410   17296 retry.go:31] will retry after 363.568695ms: ssh: handshake failed: EOF
	I1009 18:47:06.248477   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1009 18:47:06.248636   17296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1009 18:47:06.425647   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1009 18:47:06.430920   17296 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1009 18:47:06.430949   17296 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1009 18:47:06.440918   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1009 18:47:06.530026   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1009 18:47:06.530507   17296 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1009 18:47:06.530576   17296 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1009 18:47:06.536325   17296 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1009 18:47:06.536411   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1009 18:47:06.542301   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1009 18:47:06.544180   17296 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1009 18:47:06.544252   17296 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1009 18:47:06.634039   17296 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1009 18:47:06.634063   17296 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1009 18:47:06.634162   17296 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1009 18:47:06.634169   17296 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1009 18:47:06.638142   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1009 18:47:06.725440   17296 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1009 18:47:06.725488   17296 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1009 18:47:06.741571   17296 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1009 18:47:06.741601   17296 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1009 18:47:06.743093   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1009 18:47:06.825477   17296 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1009 18:47:06.825525   17296 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1009 18:47:06.829207   17296 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1009 18:47:06.829330   17296 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1009 18:47:06.842990   17296 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I1009 18:47:06.843095   17296 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1009 18:47:06.925395   17296 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1009 18:47:06.925483   17296 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1009 18:47:06.938857   17296 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 18:47:06.938946   17296 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1009 18:47:06.948788   17296 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1009 18:47:06.948815   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1009 18:47:07.024885   17296 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1009 18:47:07.024914   17296 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1009 18:47:07.045213   17296 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1009 18:47:07.045255   17296 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1009 18:47:07.128395   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1009 18:47:07.146835   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1009 18:47:07.226324   17296 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1009 18:47:07.226348   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1009 18:47:07.226624   17296 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1009 18:47:07.226637   17296 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1009 18:47:07.231415   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1009 18:47:07.339843   17296 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1009 18:47:07.339925   17296 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1009 18:47:07.425290   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1009 18:47:07.432103   17296 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1009 18:47:07.432195   17296 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1009 18:47:07.532048   17296 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 18:47:07.532138   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1009 18:47:07.631412   17296 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1009 18:47:07.631497   17296 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1009 18:47:07.729557   17296 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.480887366s)
	I1009 18:47:07.730664   17296 node_ready.go:35] waiting up to 6m0s for node "addons-814968" to be "Ready" ...
	I1009 18:47:07.730947   17296 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.482432475s)
	I1009 18:47:07.730997   17296 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
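Note: the sed pipeline that just completed rewrites the coredns ConfigMap in place. Reconstructed from the sed expressions in the command itself, the injected Corefile fragment is:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }

(plus a `log` directive before `errors`), which lets pods resolve host.minikube.internal to the docker network's gateway.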
	I1009 18:47:07.744576   17296 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1009 18:47:07.744653   17296 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1009 18:47:07.828462   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1009 18:47:07.941174   17296 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1009 18:47:07.941258   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1009 18:47:08.027189   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.601503525s)
	I1009 18:47:08.226391   17296 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1009 18:47:08.226419   17296 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1009 18:47:08.325581   17296 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1009 18:47:08.325611   17296 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1009 18:47:08.342390   17296 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-814968" context rescaled to 1 replicas
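Note: on a single-node cluster minikube rescales the coredns Deployment from kubeadm's default of two replicas down to one, as logged above. The plain kubectl equivalent would be:

    kubectl -n kube-system scale deployment coredns --replicas=1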
	I1009 18:47:08.540370   17296 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I1009 18:47:08.540465   17296 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I1009 18:47:08.543260   17296 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1009 18:47:08.543326   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1009 18:47:08.733096   17296 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1009 18:47:08.733125   17296 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1009 18:47:08.836750   17296 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1009 18:47:08.836779   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1009 18:47:08.942110   17296 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1009 18:47:08.942142   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I1009 18:47:09.124663   17296 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1009 18:47:09.124692   17296 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1009 18:47:09.228590   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1009 18:47:09.231463   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1009 18:47:09.328731   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.887710583s)
	I1009 18:47:09.748850   17296 node_ready.go:53] node "addons-814968" has status "Ready":"False"
	I1009 18:47:10.134369   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.60423985s)
	I1009 18:47:10.145121   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.6027235s)
	I1009 18:47:10.145148   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.50697111s)
	I1009 18:47:10.145189   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.402074182s)
	W1009 18:47:10.244988   17296 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
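Note: the "Operation cannot be fulfilled ... the object has been modified" error above is an ordinary optimistic-concurrency conflict: another writer (here the just-installed local-path provisioner) updated the local-path StorageClass between minikube's read and its update, so the resourceVersion no longer matched. A conflict-free manual equivalent would patch the annotation rather than replace the object:

    kubectl patch storageclass local-path \
      -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'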
	I1009 18:47:11.936422   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.807987062s)
	I1009 18:47:11.936803   17296 addons.go:475] Verifying addon ingress=true in "addons-814968"
	I1009 18:47:11.936809   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.511471288s)
	I1009 18:47:11.936742   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.705247522s)
	I1009 18:47:11.936985   17296 addons.go:475] Verifying addon registry=true in "addons-814968"
	I1009 18:47:11.936769   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.789811994s)
	I1009 18:47:11.937025   17296 addons.go:475] Verifying addon metrics-server=true in "addons-814968"
	I1009 18:47:11.938608   17296 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-814968 service yakd-dashboard -n yakd-dashboard
	
	I1009 18:47:11.939531   17296 out.go:177] * Verifying ingress addon...
	I1009 18:47:11.939536   17296 out.go:177] * Verifying registry addon...
	I1009 18:47:11.942036   17296 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1009 18:47:11.942228   17296 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1009 18:47:11.947045   17296 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1009 18:47:11.947068   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:11.947367   17296 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1009 18:47:11.947389   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:12.236367   17296 node_ready.go:53] node "addons-814968" has status "Ready":"False"
	I1009 18:47:12.446623   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:12.447185   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:12.448705   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.620106938s)
	W1009 18:47:12.448745   17296 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1009 18:47:12.448768   17296 retry.go:31] will retry after 174.33179ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1009 18:47:12.448856   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.220230622s)
	I1009 18:47:12.624011   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
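The failure above is a CRD ordering race: the VolumeSnapshotClass object was submitted in the same kubectl apply that created the snapshot.storage.k8s.io CRDs, so the API server could not yet map the VolumeSnapshotClass kind ("ensure CRDs are installed first"), and minikube recovers by retrying, here with apply --force. A minimal sketch of how to avoid the race when applying such manifests by hand, reusing the manifest paths from the log (the 60s timeout is an arbitrary choice, not taken from this run):

    # 1. Create the CRDs on their own first.
    kubectl apply \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml \
      -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
    # 2. Block until the API server serves the new kinds (Established condition).
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    # 3. Only then apply resources that instantiate the CRDs.
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml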
	I1009 18:47:12.945193   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:12.946209   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:13.230751   17296 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1009 18:47:13.230831   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:13.252020   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (4.020456466s)
	I1009 18:47:13.252057   17296 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-814968"
	I1009 18:47:13.252473   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:13.253926   17296 out.go:177] * Verifying csi-hostpath-driver addon...
	I1009 18:47:13.255942   17296 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1009 18:47:13.263890   17296 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1009 18:47:13.263913   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
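kapi.go polls pods matching a label selector until they leave Pending; a rough manual equivalent of the check above, using the selector and namespace straight from the log:

    kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver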
	I1009 18:47:13.438845   17296 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1009 18:47:13.445828   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:13.446302   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:13.457727   17296 addons.go:234] Setting addon gcp-auth=true in "addons-814968"
	I1009 18:47:13.457798   17296 host.go:66] Checking if "addons-814968" exists ...
	I1009 18:47:13.458126   17296 cli_runner.go:164] Run: docker container inspect addons-814968 --format={{.State.Status}}
	I1009 18:47:13.477774   17296 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1009 18:47:13.477835   17296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-814968
	I1009 18:47:13.496978   17296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/addons-814968/id_rsa Username:docker}
	I1009 18:47:13.759722   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:13.945407   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:13.945935   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:14.259539   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:14.444793   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:14.445361   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:14.733302   17296 node_ready.go:53] node "addons-814968" has status "Ready":"False"
	I1009 18:47:14.759286   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:14.946070   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:14.946533   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:15.260324   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:15.449667   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:15.525623   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:15.824818   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:15.945549   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:15.946194   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:16.133728   17296 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.509660538s)
	I1009 18:47:16.133773   17296 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.655964577s)
	I1009 18:47:16.136119   17296 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I1009 18:47:16.138068   17296 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I1009 18:47:16.139767   17296 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1009 18:47:16.139793   17296 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1009 18:47:16.158836   17296 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1009 18:47:16.158860   17296 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1009 18:47:16.176122   17296 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1009 18:47:16.176146   17296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1009 18:47:16.192803   17296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
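The three gcp-auth manifests applied here create the addon's namespace, Service, and webhook deployment; the addon works by registering a mutating admission webhook that injects Google application-credential settings into newly created pods. A quick sanity check after the apply, as a sketch (only the gcp-auth namespace is taken from this log; the webhook configuration name is not shown here):

    kubectl get mutatingwebhookconfigurations
    kubectl -n gcp-auth get pods,svc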
	I1009 18:47:16.259502   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:16.445021   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:16.445717   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:16.734774   17296 node_ready.go:53] node "addons-814968" has status "Ready":"False"
	I1009 18:47:16.759897   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:16.836262   17296 addons.go:475] Verifying addon gcp-auth=true in "addons-814968"
	I1009 18:47:16.837691   17296 out.go:177] * Verifying gcp-auth addon...
	I1009 18:47:16.839938   17296 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1009 18:47:16.859872   17296 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1009 18:47:16.859896   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:16.945945   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:16.946486   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:17.259298   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:17.343779   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:17.445759   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:17.446039   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:17.759321   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:17.843703   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:17.945506   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:17.945984   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:18.260214   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:18.343502   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:18.444786   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:18.445160   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:18.759105   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:18.843238   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:18.945379   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:18.945961   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:19.233259   17296 node_ready.go:53] node "addons-814968" has status "Ready":"False"
	I1009 18:47:19.259504   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:19.342817   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:19.446796   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:19.447700   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:19.759022   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:19.843137   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:19.945481   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:19.945976   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:20.259635   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:20.342794   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:20.445448   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:20.445780   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:20.759896   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:20.843377   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:20.944634   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:20.945217   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:21.234094   17296 node_ready.go:53] node "addons-814968" has status "Ready":"False"
	I1009 18:47:21.259684   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:21.343167   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:21.445782   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:21.446075   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:21.759138   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:21.843535   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:21.945086   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:21.945511   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:22.259434   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:22.342828   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:22.445436   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:22.445892   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:22.758821   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:22.842930   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:22.945533   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:22.945847   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:23.259452   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:23.342758   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:23.445295   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:23.445656   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:23.733531   17296 node_ready.go:53] node "addons-814968" has status "Ready":"False"
	I1009 18:47:23.759371   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:23.842472   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:23.944967   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:23.945359   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:24.258773   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:24.342971   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:24.445380   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:24.445776   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:24.736551   17296 node_ready.go:49] node "addons-814968" has status "Ready":"True"
	I1009 18:47:24.736575   17296 node_ready.go:38] duration metric: took 17.005838844s for node "addons-814968" to be "Ready" ...
	I1009 18:47:24.736584   17296 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1009 18:47:24.744602   17296 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dcfpw" in "kube-system" namespace to be "Ready" ...
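node_ready and pod_ready above poll until the node and each system-critical pod report the Ready condition as True. A rough standalone equivalent with kubectl, reusing the node name, the 6m budget, and the kube-dns label from the log:

    kubectl wait --for=condition=Ready node/addons-814968 --timeout=6m
    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m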
	I1009 18:47:24.759976   17296 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1009 18:47:24.759999   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:24.852175   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:24.948047   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:24.948280   17296 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1009 18:47:24.948297   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:25.262018   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:25.425522   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:25.526821   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:25.527989   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:25.760317   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:25.843006   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:25.945560   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:25.946021   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:26.260702   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:26.343790   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:26.447269   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:26.447626   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:26.749365   17296 pod_ready.go:93] pod "coredns-7c65d6cfc9-dcfpw" in "kube-system" namespace has status "Ready":"True"
	I1009 18:47:26.749387   17296 pod_ready.go:82] duration metric: took 2.004695919s for pod "coredns-7c65d6cfc9-dcfpw" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:26.749413   17296 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-814968" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:26.753259   17296 pod_ready.go:93] pod "etcd-addons-814968" in "kube-system" namespace has status "Ready":"True"
	I1009 18:47:26.753281   17296 pod_ready.go:82] duration metric: took 3.859154ms for pod "etcd-addons-814968" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:26.753296   17296 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-814968" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:26.757336   17296 pod_ready.go:93] pod "kube-apiserver-addons-814968" in "kube-system" namespace has status "Ready":"True"
	I1009 18:47:26.757355   17296 pod_ready.go:82] duration metric: took 4.05242ms for pod "kube-apiserver-addons-814968" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:26.757364   17296 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-814968" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:26.760308   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:26.761109   17296 pod_ready.go:93] pod "kube-controller-manager-addons-814968" in "kube-system" namespace has status "Ready":"True"
	I1009 18:47:26.761125   17296 pod_ready.go:82] duration metric: took 3.755076ms for pod "kube-controller-manager-addons-814968" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:26.761135   17296 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wprfw" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:26.764780   17296 pod_ready.go:93] pod "kube-proxy-wprfw" in "kube-system" namespace has status "Ready":"True"
	I1009 18:47:26.764798   17296 pod_ready.go:82] duration metric: took 3.657575ms for pod "kube-proxy-wprfw" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:26.764806   17296 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-814968" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:26.860586   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:26.945696   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:26.946004   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:27.148107   17296 pod_ready.go:93] pod "kube-scheduler-addons-814968" in "kube-system" namespace has status "Ready":"True"
	I1009 18:47:27.148131   17296 pod_ready.go:82] duration metric: took 383.319465ms for pod "kube-scheduler-addons-814968" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:27.148141   17296 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace to be "Ready" ...
	I1009 18:47:27.261353   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:27.349074   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:27.446947   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:27.448175   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:27.837088   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:27.843257   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:27.948109   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:27.949237   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:28.260442   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:28.343002   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:28.446287   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:28.447557   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:28.760728   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:28.843589   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:28.945278   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:28.945780   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:29.154004   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
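From this point the poll repeatedly reports metrics-server-84c5f94fbc-5gbfm as not Ready (the parallel MetricsServer test in this run also failed). A generic way to investigate a pod stuck unready, using the pod name from the log:

    kubectl -n kube-system describe pod metrics-server-84c5f94fbc-5gbfm
    kubectl -n kube-system logs metrics-server-84c5f94fbc-5gbfm --tail=50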
	I1009 18:47:29.261640   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:29.343520   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:29.446488   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:29.447659   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:29.761146   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:29.843443   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:29.945471   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:29.945727   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:30.260630   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:30.343326   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:30.446651   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:30.447176   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:30.760370   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:30.843163   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:30.946390   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:30.946801   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:31.154792   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:31.260735   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:31.362287   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:31.461879   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:31.462166   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:31.760778   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:31.843460   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:31.945691   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:31.945865   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:32.260035   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:32.342713   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:32.446417   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:32.447321   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:32.761290   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:32.843300   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:32.946078   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:32.946339   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:33.261021   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:33.361077   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:33.445910   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:33.446441   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:33.653549   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:33.760532   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:33.843521   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:33.945537   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:33.945776   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:34.260597   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:34.343297   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:34.446438   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:34.446917   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:34.759735   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:34.843826   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:34.946501   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:34.947147   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:35.260835   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:35.342812   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:35.445887   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:35.446256   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:35.653831   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:35.761213   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:35.843780   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:35.946095   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:35.946746   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:36.260351   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:36.343302   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:36.446352   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:36.446482   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:36.760085   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:36.842997   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:36.946319   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:36.946457   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:37.259858   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:37.344101   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:37.446240   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:37.447073   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:37.654375   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:37.760063   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:37.843545   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:37.945936   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:37.946050   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:38.260595   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:38.343585   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:38.445878   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:38.446335   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:38.760088   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:38.843149   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:38.946467   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:38.946825   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:39.260408   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:39.343449   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:39.445719   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:39.445832   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:39.760735   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:39.861038   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:39.945993   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:39.946914   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:40.153821   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:40.260610   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:40.343670   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:40.445625   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:40.446505   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:40.761228   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:40.843782   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:40.946270   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:40.946782   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:41.264880   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:41.365653   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:41.445704   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:41.445948   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:41.760882   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:41.860846   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:41.946144   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:41.946375   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:42.154251   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:42.260526   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:42.343602   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:42.445414   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:42.445829   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:42.760248   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:42.843960   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:42.945874   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:42.946212   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:43.261236   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:43.343760   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:43.445666   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:43.446004   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:43.761367   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:43.843925   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:43.946157   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:43.946569   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:44.157719   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:44.260870   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:44.343648   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:44.446315   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:44.447509   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:44.760909   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:44.843599   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:44.945648   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:44.946044   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:45.260595   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:45.343824   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:45.445957   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:45.446192   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:45.760554   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:45.844255   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:45.946291   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:45.946662   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:46.260281   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:46.343037   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:46.447042   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:46.447614   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:46.653785   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:46.762104   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:46.862194   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:46.945957   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:46.946362   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:47.260329   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:47.343479   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:47.445447   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:47.445742   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:47.814338   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:47.944521   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:47.945492   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:47.945889   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:48.259922   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:48.342622   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:48.445652   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:48.446235   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:48.825496   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:48.843015   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:48.946092   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:48.946745   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:49.154437   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:49.259953   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:49.343304   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:49.447965   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:49.448020   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:49.831622   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:49.844681   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:49.947627   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:49.948357   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:50.260040   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:50.342910   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:50.446330   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:50.446930   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:50.760696   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:50.843459   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:50.945891   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:50.946292   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:51.155349   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:51.259721   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:51.343645   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:51.446561   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:51.447628   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:51.761132   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:51.843134   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:51.947089   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:51.947642   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:52.260439   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:52.344004   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:52.445777   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:52.446366   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:52.760091   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:52.842839   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:52.946010   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:52.946327   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:53.259798   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:53.343644   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:53.445722   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:53.445993   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:53.653951   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:53.760624   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:53.843690   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:53.945674   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:53.946141   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:54.260486   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:54.360334   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:54.446126   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:54.446415   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:54.760657   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:54.843572   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:54.945937   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:54.946347   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:55.261152   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:55.342911   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:55.446044   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:55.446707   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:55.760208   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:55.842710   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:55.946017   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:55.946363   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:56.154467   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:56.260735   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:56.343377   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:56.446873   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:56.447546   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:56.760307   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:56.843166   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:56.946231   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:56.946446   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:57.260745   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:57.359936   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:57.446044   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:57.446896   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:57.760711   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:57.844133   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:57.953069   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:57.953847   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:58.266805   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:47:58.268009   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:58.342958   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:58.445848   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:58.446207   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:58.760729   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:58.843831   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:58.945857   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:58.946233   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:59.260845   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:59.342473   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:47:59.445563   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:47:59.445784   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:47:59.833764   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:47:59.843671   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:00.025251   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1009 18:48:00.026958   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:00.331459   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:00.343401   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:00.446937   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:00.447876   17296 kapi.go:107] duration metric: took 48.505645503s to wait for kubernetes.io/minikube-addons=registry ...
	I1009 18:48:00.655877   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:00.828240   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:00.843871   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:00.948670   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:01.328473   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:01.344250   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:01.447373   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:01.826838   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:01.844012   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:01.946643   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:02.260901   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:02.343452   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:02.445427   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:02.760216   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:02.842922   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:02.946312   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:03.154235   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:03.261221   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:03.343823   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:03.446297   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:03.760629   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:03.843655   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:03.945900   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:04.260415   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:04.343387   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:04.446433   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:04.760872   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:04.843553   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:04.945613   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:05.260673   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:05.343279   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:05.445325   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:05.654683   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:05.760548   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:05.843082   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:05.946317   17296 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1009 18:48:06.260775   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:06.343462   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:06.527589   17296 kapi.go:107] duration metric: took 54.585549271s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1009 18:48:06.830614   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:06.843150   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:07.260177   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:07.343435   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:07.655453   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:07.760402   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:07.843152   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:08.261859   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:08.363118   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:08.761072   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:08.843661   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:09.260481   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:09.343494   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:09.788497   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:09.854862   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:10.153622   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:10.260563   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:10.343307   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:10.761204   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:10.843348   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:11.260472   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:11.360877   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:11.760438   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1009 18:48:11.843463   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:12.153835   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:12.260615   17296 kapi.go:107] duration metric: took 59.004669658s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1009 18:48:12.343341   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:12.843416   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:13.343137   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:13.843500   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:14.154877   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:14.343882   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:14.843625   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:15.343544   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:15.843109   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:16.343253   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:16.653107   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:16.843682   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:17.343872   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:17.843693   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:18.342897   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:18.653895   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:18.843279   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:19.343110   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:19.842717   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:20.343898   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:20.654036   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:20.843665   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:21.343105   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:21.843728   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:22.343627   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:22.843131   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:23.154817   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:23.343306   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:23.843650   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:24.342878   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:24.843169   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:25.343091   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:25.653774   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:25.843398   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:26.343625   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:26.842860   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:27.343768   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:27.843773   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:28.154240   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:28.343018   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:28.843701   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:29.343688   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:29.844325   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:30.155325   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:30.343039   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:30.844055   17296 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1009 18:48:31.343489   17296 kapi.go:107] duration metric: took 1m14.50354887s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1009 18:48:31.345687   17296 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-814968 cluster.
	I1009 18:48:31.347380   17296 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1009 18:48:31.348888   17296 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1009 18:48:31.350701   17296 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, storage-provisioner-rancher, metrics-server, yakd, inspektor-gadget, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I1009 18:48:31.352263   17296 addons.go:510] duration metric: took 1m25.438051425s for enable addons: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns storage-provisioner-rancher metrics-server yakd inspektor-gadget volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
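The gcp-auth notes above describe the opt-out mechanism: once the addon is enabled, the webhook injects GCP credentials into every newly created pod unless the pod carries a label with the `gcp-auth-skip-secret` key. A minimal sketch of opting a pod out, assuming the webhook only checks for the presence of that key (the label value "true" and the pod name skip-demo are illustrative, not from the log):

  # Hypothetical: create a pod the gcp-auth webhook should skip.
  kubectl --context addons-814968 run skip-demo \
    --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc \
    --labels="gcp-auth-skip-secret=true" \
    -- sleep 3600
  # If the opt-out worked, no gcp-creds volume should appear here:
  kubectl --context addons-814968 get pod skip-demo \
    -o jsonpath='{.spec.volumes[*].name}'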
	I1009 18:48:32.653863   17296 pod_ready.go:103] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"False"
	I1009 18:48:35.154466   17296 pod_ready.go:93] pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:35.154498   17296 pod_ready.go:82] duration metric: took 1m8.006349266s for pod "metrics-server-84c5f94fbc-5gbfm" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:35.154511   17296 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-7txf4" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:35.159453   17296 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-7txf4" in "kube-system" namespace has status "Ready":"True"
	I1009 18:48:35.159481   17296 pod_ready.go:82] duration metric: took 4.961783ms for pod "nvidia-device-plugin-daemonset-7txf4" in "kube-system" namespace to be "Ready" ...
	I1009 18:48:35.159507   17296 pod_ready.go:39] duration metric: took 1m10.422897734s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
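The pod_ready loop above polls until every pod matching the listed system-critical selectors reports Ready. The same wait can be reproduced by hand with kubectl against those selectors; a sketch, assuming the pods live in kube-system and reusing the 6m timeout logged for the nvidia-device-plugin wait:

  # Hypothetical manual equivalent of minikube's pod_ready polling,
  # shown for two of the logged label selectors:
  kubectl --context addons-814968 -n kube-system wait pod \
    -l k8s-app=kube-dns --for=condition=Ready --timeout=6m
  kubectl --context addons-814968 -n kube-system wait pod \
    -l component=kube-apiserver --for=condition=Ready --timeout=6m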
	I1009 18:48:35.159528   17296 api_server.go:52] waiting for apiserver process to appear ...
	I1009 18:48:35.159565   17296 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:48:35.159630   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:48:35.194557   17296 cri.go:89] found id: "16933cbf0d80208494e9bb3cb9830fd168761741d4d4e7647b2b55ed3eb43c8c"
	I1009 18:48:35.194582   17296 cri.go:89] found id: ""
	I1009 18:48:35.194592   17296 logs.go:282] 1 containers: [16933cbf0d80208494e9bb3cb9830fd168761741d4d4e7647b2b55ed3eb43c8c]
	I1009 18:48:35.194645   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:35.197956   17296 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:48:35.198021   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:48:35.231379   17296 cri.go:89] found id: "1fa69ee53f8ff640d4a04e850ee35f59a25d1e106284730f1dfe646abd9fbe38"
	I1009 18:48:35.231399   17296 cri.go:89] found id: ""
	I1009 18:48:35.231408   17296 logs.go:282] 1 containers: [1fa69ee53f8ff640d4a04e850ee35f59a25d1e106284730f1dfe646abd9fbe38]
	I1009 18:48:35.231466   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:35.234767   17296 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:48:35.234839   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:48:35.269878   17296 cri.go:89] found id: "02903fa33ba6d3cfcad710302d0a6876157cbaea48453fe830b9ea47e1e6a08b"
	I1009 18:48:35.269900   17296 cri.go:89] found id: ""
	I1009 18:48:35.269907   17296 logs.go:282] 1 containers: [02903fa33ba6d3cfcad710302d0a6876157cbaea48453fe830b9ea47e1e6a08b]
	I1009 18:48:35.269959   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:35.273465   17296 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:48:35.273534   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:48:35.307585   17296 cri.go:89] found id: "221dded81f0de01ca060a812c2b039396973be8cb1b615537a1dd3baf3739915"
	I1009 18:48:35.307609   17296 cri.go:89] found id: ""
	I1009 18:48:35.307620   17296 logs.go:282] 1 containers: [221dded81f0de01ca060a812c2b039396973be8cb1b615537a1dd3baf3739915]
	I1009 18:48:35.307671   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:35.311029   17296 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:48:35.311088   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:48:35.345746   17296 cri.go:89] found id: "2ecd337cc588bb369774c4b52c5bf4512b7380cb6e1c083dfd205b222052bfb1"
	I1009 18:48:35.345769   17296 cri.go:89] found id: ""
	I1009 18:48:35.345777   17296 logs.go:282] 1 containers: [2ecd337cc588bb369774c4b52c5bf4512b7380cb6e1c083dfd205b222052bfb1]
	I1009 18:48:35.345823   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:35.349300   17296 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:48:35.349379   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:48:35.383274   17296 cri.go:89] found id: "6851332d0dffcb1290144bb0978b1aae3c11a2c819bc2626e8621211c70ff867"
	I1009 18:48:35.383302   17296 cri.go:89] found id: ""
	I1009 18:48:35.383313   17296 logs.go:282] 1 containers: [6851332d0dffcb1290144bb0978b1aae3c11a2c819bc2626e8621211c70ff867]
	I1009 18:48:35.383374   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:35.386711   17296 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:48:35.386773   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:48:35.419254   17296 cri.go:89] found id: "f2f6ada66ed9163b87179ad3082fc3f7b8897658ed1e8a9ee86fb7b0ffd5c67c"
	I1009 18:48:35.419281   17296 cri.go:89] found id: ""
	I1009 18:48:35.419292   17296 logs.go:282] 1 containers: [f2f6ada66ed9163b87179ad3082fc3f7b8897658ed1e8a9ee86fb7b0ffd5c67c]
	I1009 18:48:35.419349   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:35.422688   17296 logs.go:123] Gathering logs for kindnet [f2f6ada66ed9163b87179ad3082fc3f7b8897658ed1e8a9ee86fb7b0ffd5c67c] ...
	I1009 18:48:35.422711   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2f6ada66ed9163b87179ad3082fc3f7b8897658ed1e8a9ee86fb7b0ffd5c67c"
	I1009 18:48:35.455388   17296 logs.go:123] Gathering logs for container status ...
	I1009 18:48:35.455414   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:48:35.495741   17296 logs.go:123] Gathering logs for etcd [1fa69ee53f8ff640d4a04e850ee35f59a25d1e106284730f1dfe646abd9fbe38] ...
	I1009 18:48:35.495767   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fa69ee53f8ff640d4a04e850ee35f59a25d1e106284730f1dfe646abd9fbe38"
	I1009 18:48:35.536322   17296 logs.go:123] Gathering logs for kube-proxy [2ecd337cc588bb369774c4b52c5bf4512b7380cb6e1c083dfd205b222052bfb1] ...
	I1009 18:48:35.536364   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ecd337cc588bb369774c4b52c5bf4512b7380cb6e1c083dfd205b222052bfb1"
	I1009 18:48:35.570198   17296 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:48:35.570224   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 18:48:35.665335   17296 logs.go:123] Gathering logs for kube-apiserver [16933cbf0d80208494e9bb3cb9830fd168761741d4d4e7647b2b55ed3eb43c8c] ...
	I1009 18:48:35.665365   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16933cbf0d80208494e9bb3cb9830fd168761741d4d4e7647b2b55ed3eb43c8c"
	I1009 18:48:35.709971   17296 logs.go:123] Gathering logs for coredns [02903fa33ba6d3cfcad710302d0a6876157cbaea48453fe830b9ea47e1e6a08b] ...
	I1009 18:48:35.710008   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02903fa33ba6d3cfcad710302d0a6876157cbaea48453fe830b9ea47e1e6a08b"
	I1009 18:48:35.744816   17296 logs.go:123] Gathering logs for kube-scheduler [221dded81f0de01ca060a812c2b039396973be8cb1b615537a1dd3baf3739915] ...
	I1009 18:48:35.744843   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221dded81f0de01ca060a812c2b039396973be8cb1b615537a1dd3baf3739915"
	I1009 18:48:35.786298   17296 logs.go:123] Gathering logs for kube-controller-manager [6851332d0dffcb1290144bb0978b1aae3c11a2c819bc2626e8621211c70ff867] ...
	I1009 18:48:35.786339   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6851332d0dffcb1290144bb0978b1aae3c11a2c819bc2626e8621211c70ff867"
	I1009 18:48:35.843800   17296 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:48:35.843834   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:48:35.916834   17296 logs.go:123] Gathering logs for kubelet ...
	I1009 18:48:35.916878   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1009 18:48:35.965387   17296 logs.go:138] Found kubelet problem: Oct 09 18:47:06 addons-814968 kubelet[1623]: W1009 18:47:06.043067    1623 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-814968" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-814968' and this object
	W1009 18:48:35.965586   17296 logs.go:138] Found kubelet problem: Oct 09 18:47:06 addons-814968 kubelet[1623]: E1009 18:47:06.043134    1623 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-814968\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-814968' and this object" logger="UnhandledError"
	I1009 18:48:35.997670   17296 logs.go:123] Gathering logs for dmesg ...
	I1009 18:48:35.997708   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:48:36.010199   17296 out.go:358] Setting ErrFile to fd 2...
	I1009 18:48:36.010221   17296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1009 18:48:36.010298   17296 out.go:270] X Problems detected in kubelet:
	W1009 18:48:36.010310   17296 out.go:270]   Oct 09 18:47:06 addons-814968 kubelet[1623]: W1009 18:47:06.043067    1623 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-814968" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-814968' and this object
	W1009 18:48:36.010317   17296 out.go:270]   Oct 09 18:47:06 addons-814968 kubelet[1623]: E1009 18:47:06.043134    1623 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-814968\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-814968' and this object" logger="UnhandledError"
	I1009 18:48:36.010329   17296 out.go:358] Setting ErrFile to fd 2...
	I1009 18:48:36.010339   17296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:48:46.011629   17296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 18:48:46.024857   17296 api_server.go:72] duration metric: took 1m40.110703672s to wait for apiserver process to appear ...
	I1009 18:48:46.024883   17296 api_server.go:88] waiting for apiserver healthz status ...
	I1009 18:48:46.024915   17296 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:48:46.024970   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:48:46.058499   17296 cri.go:89] found id: "16933cbf0d80208494e9bb3cb9830fd168761741d4d4e7647b2b55ed3eb43c8c"
	I1009 18:48:46.058520   17296 cri.go:89] found id: ""
	I1009 18:48:46.058527   17296 logs.go:282] 1 containers: [16933cbf0d80208494e9bb3cb9830fd168761741d4d4e7647b2b55ed3eb43c8c]
	I1009 18:48:46.058574   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:46.061901   17296 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:48:46.061978   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:48:46.094795   17296 cri.go:89] found id: "1fa69ee53f8ff640d4a04e850ee35f59a25d1e106284730f1dfe646abd9fbe38"
	I1009 18:48:46.094816   17296 cri.go:89] found id: ""
	I1009 18:48:46.094824   17296 logs.go:282] 1 containers: [1fa69ee53f8ff640d4a04e850ee35f59a25d1e106284730f1dfe646abd9fbe38]
	I1009 18:48:46.094869   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:46.098067   17296 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:48:46.098128   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:48:46.130361   17296 cri.go:89] found id: "02903fa33ba6d3cfcad710302d0a6876157cbaea48453fe830b9ea47e1e6a08b"
	I1009 18:48:46.130385   17296 cri.go:89] found id: ""
	I1009 18:48:46.130393   17296 logs.go:282] 1 containers: [02903fa33ba6d3cfcad710302d0a6876157cbaea48453fe830b9ea47e1e6a08b]
	I1009 18:48:46.130438   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:46.133643   17296 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:48:46.133701   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:48:46.168196   17296 cri.go:89] found id: "221dded81f0de01ca060a812c2b039396973be8cb1b615537a1dd3baf3739915"
	I1009 18:48:46.168219   17296 cri.go:89] found id: ""
	I1009 18:48:46.168227   17296 logs.go:282] 1 containers: [221dded81f0de01ca060a812c2b039396973be8cb1b615537a1dd3baf3739915]
	I1009 18:48:46.168294   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:46.171547   17296 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:48:46.171605   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:48:46.205084   17296 cri.go:89] found id: "2ecd337cc588bb369774c4b52c5bf4512b7380cb6e1c083dfd205b222052bfb1"
	I1009 18:48:46.205110   17296 cri.go:89] found id: ""
	I1009 18:48:46.205118   17296 logs.go:282] 1 containers: [2ecd337cc588bb369774c4b52c5bf4512b7380cb6e1c083dfd205b222052bfb1]
	I1009 18:48:46.205161   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:46.208419   17296 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:48:46.208484   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:48:46.241599   17296 cri.go:89] found id: "6851332d0dffcb1290144bb0978b1aae3c11a2c819bc2626e8621211c70ff867"
	I1009 18:48:46.241621   17296 cri.go:89] found id: ""
	I1009 18:48:46.241631   17296 logs.go:282] 1 containers: [6851332d0dffcb1290144bb0978b1aae3c11a2c819bc2626e8621211c70ff867]
	I1009 18:48:46.241685   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:46.245016   17296 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:48:46.245073   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:48:46.278801   17296 cri.go:89] found id: "f2f6ada66ed9163b87179ad3082fc3f7b8897658ed1e8a9ee86fb7b0ffd5c67c"
	I1009 18:48:46.278821   17296 cri.go:89] found id: ""
	I1009 18:48:46.278829   17296 logs.go:282] 1 containers: [f2f6ada66ed9163b87179ad3082fc3f7b8897658ed1e8a9ee86fb7b0ffd5c67c]
	I1009 18:48:46.278872   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:46.282257   17296 logs.go:123] Gathering logs for etcd [1fa69ee53f8ff640d4a04e850ee35f59a25d1e106284730f1dfe646abd9fbe38] ...
	I1009 18:48:46.282285   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fa69ee53f8ff640d4a04e850ee35f59a25d1e106284730f1dfe646abd9fbe38"
	I1009 18:48:46.322549   17296 logs.go:123] Gathering logs for coredns [02903fa33ba6d3cfcad710302d0a6876157cbaea48453fe830b9ea47e1e6a08b] ...
	I1009 18:48:46.322587   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02903fa33ba6d3cfcad710302d0a6876157cbaea48453fe830b9ea47e1e6a08b"
	I1009 18:48:46.357924   17296 logs.go:123] Gathering logs for kube-scheduler [221dded81f0de01ca060a812c2b039396973be8cb1b615537a1dd3baf3739915] ...
	I1009 18:48:46.357958   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221dded81f0de01ca060a812c2b039396973be8cb1b615537a1dd3baf3739915"
	I1009 18:48:46.397521   17296 logs.go:123] Gathering logs for kube-controller-manager [6851332d0dffcb1290144bb0978b1aae3c11a2c819bc2626e8621211c70ff867] ...
	I1009 18:48:46.397555   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6851332d0dffcb1290144bb0978b1aae3c11a2c819bc2626e8621211c70ff867"
	I1009 18:48:46.457165   17296 logs.go:123] Gathering logs for kindnet [f2f6ada66ed9163b87179ad3082fc3f7b8897658ed1e8a9ee86fb7b0ffd5c67c] ...
	I1009 18:48:46.457201   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2f6ada66ed9163b87179ad3082fc3f7b8897658ed1e8a9ee86fb7b0ffd5c67c"
	I1009 18:48:46.490524   17296 logs.go:123] Gathering logs for kubelet ...
	I1009 18:48:46.490552   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1009 18:48:46.535478   17296 logs.go:138] Found kubelet problem: Oct 09 18:47:06 addons-814968 kubelet[1623]: W1009 18:47:06.043067    1623 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-814968" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-814968' and this object
	W1009 18:48:46.535658   17296 logs.go:138] Found kubelet problem: Oct 09 18:47:06 addons-814968 kubelet[1623]: E1009 18:47:06.043134    1623 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-814968\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-814968' and this object" logger="UnhandledError"
	I1009 18:48:46.572731   17296 logs.go:123] Gathering logs for dmesg ...
	I1009 18:48:46.572775   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:48:46.584660   17296 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:48:46.584688   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 18:48:46.684444   17296 logs.go:123] Gathering logs for container status ...
	I1009 18:48:46.684475   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:48:46.726249   17296 logs.go:123] Gathering logs for kube-apiserver [16933cbf0d80208494e9bb3cb9830fd168761741d4d4e7647b2b55ed3eb43c8c] ...
	I1009 18:48:46.726275   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16933cbf0d80208494e9bb3cb9830fd168761741d4d4e7647b2b55ed3eb43c8c"
	I1009 18:48:46.771681   17296 logs.go:123] Gathering logs for kube-proxy [2ecd337cc588bb369774c4b52c5bf4512b7380cb6e1c083dfd205b222052bfb1] ...
	I1009 18:48:46.771728   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ecd337cc588bb369774c4b52c5bf4512b7380cb6e1c083dfd205b222052bfb1"
	I1009 18:48:46.806520   17296 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:48:46.806561   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:48:46.881346   17296 out.go:358] Setting ErrFile to fd 2...
	I1009 18:48:46.881380   17296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1009 18:48:46.881439   17296 out.go:270] X Problems detected in kubelet:
	W1009 18:48:46.881447   17296 out.go:270]   Oct 09 18:47:06 addons-814968 kubelet[1623]: W1009 18:47:06.043067    1623 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-814968" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-814968' and this object
	W1009 18:48:46.881454   17296 out.go:270]   Oct 09 18:47:06 addons-814968 kubelet[1623]: E1009 18:47:06.043134    1623 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-814968\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-814968' and this object" logger="UnhandledError"
	I1009 18:48:46.881460   17296 out.go:358] Setting ErrFile to fd 2...
	I1009 18:48:46.881467   17296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:48:56.881995   17296 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1009 18:48:56.886467   17296 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1009 18:48:56.887447   17296 api_server.go:141] control plane version: v1.31.1
	I1009 18:48:56.887474   17296 api_server.go:131] duration metric: took 10.862584003s to wait for apiserver health ...
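The healthz probe above hits the apiserver endpoint directly and treats an HTTP 200 with body "ok" as healthy. A one-line manual equivalent, assuming anonymous access to /healthz is enabled (the Kubernetes default) and skipping verification of the cluster CA with -k:

  # Hypothetical manual check against the endpoint from api_server.go:253:
  curl -k https://192.168.49.2:8443/healthz   # expect: ok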
	I1009 18:48:56.887487   17296 system_pods.go:43] waiting for kube-system pods to appear ...
	I1009 18:48:56.887597   17296 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1009 18:48:56.887677   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1009 18:48:56.921141   17296 cri.go:89] found id: "16933cbf0d80208494e9bb3cb9830fd168761741d4d4e7647b2b55ed3eb43c8c"
	I1009 18:48:56.921166   17296 cri.go:89] found id: ""
	I1009 18:48:56.921175   17296 logs.go:282] 1 containers: [16933cbf0d80208494e9bb3cb9830fd168761741d4d4e7647b2b55ed3eb43c8c]
	I1009 18:48:56.921222   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:56.924386   17296 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1009 18:48:56.924458   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1009 18:48:56.957508   17296 cri.go:89] found id: "1fa69ee53f8ff640d4a04e850ee35f59a25d1e106284730f1dfe646abd9fbe38"
	I1009 18:48:56.957532   17296 cri.go:89] found id: ""
	I1009 18:48:56.957540   17296 logs.go:282] 1 containers: [1fa69ee53f8ff640d4a04e850ee35f59a25d1e106284730f1dfe646abd9fbe38]
	I1009 18:48:56.957585   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:56.960906   17296 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1009 18:48:56.960966   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1009 18:48:56.994274   17296 cri.go:89] found id: "02903fa33ba6d3cfcad710302d0a6876157cbaea48453fe830b9ea47e1e6a08b"
	I1009 18:48:56.994302   17296 cri.go:89] found id: ""
	I1009 18:48:56.994312   17296 logs.go:282] 1 containers: [02903fa33ba6d3cfcad710302d0a6876157cbaea48453fe830b9ea47e1e6a08b]
	I1009 18:48:56.994370   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:56.998013   17296 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1009 18:48:56.998083   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1009 18:48:57.031708   17296 cri.go:89] found id: "221dded81f0de01ca060a812c2b039396973be8cb1b615537a1dd3baf3739915"
	I1009 18:48:57.031728   17296 cri.go:89] found id: ""
	I1009 18:48:57.031734   17296 logs.go:282] 1 containers: [221dded81f0de01ca060a812c2b039396973be8cb1b615537a1dd3baf3739915]
	I1009 18:48:57.031786   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:57.035185   17296 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1009 18:48:57.035275   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1009 18:48:57.071177   17296 cri.go:89] found id: "2ecd337cc588bb369774c4b52c5bf4512b7380cb6e1c083dfd205b222052bfb1"
	I1009 18:48:57.071223   17296 cri.go:89] found id: ""
	I1009 18:48:57.071234   17296 logs.go:282] 1 containers: [2ecd337cc588bb369774c4b52c5bf4512b7380cb6e1c083dfd205b222052bfb1]
	I1009 18:48:57.071296   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:57.074708   17296 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1009 18:48:57.074773   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1009 18:48:57.110767   17296 cri.go:89] found id: "6851332d0dffcb1290144bb0978b1aae3c11a2c819bc2626e8621211c70ff867"
	I1009 18:48:57.110787   17296 cri.go:89] found id: ""
	I1009 18:48:57.110796   17296 logs.go:282] 1 containers: [6851332d0dffcb1290144bb0978b1aae3c11a2c819bc2626e8621211c70ff867]
	I1009 18:48:57.110851   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:57.114310   17296 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1009 18:48:57.114378   17296 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1009 18:48:57.152783   17296 cri.go:89] found id: "f2f6ada66ed9163b87179ad3082fc3f7b8897658ed1e8a9ee86fb7b0ffd5c67c"
	I1009 18:48:57.152802   17296 cri.go:89] found id: ""
	I1009 18:48:57.152808   17296 logs.go:282] 1 containers: [f2f6ada66ed9163b87179ad3082fc3f7b8897658ed1e8a9ee86fb7b0ffd5c67c]
	I1009 18:48:57.152854   17296 ssh_runner.go:195] Run: which crictl
	I1009 18:48:57.156527   17296 logs.go:123] Gathering logs for kube-controller-manager [6851332d0dffcb1290144bb0978b1aae3c11a2c819bc2626e8621211c70ff867] ...
	I1009 18:48:57.156549   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 6851332d0dffcb1290144bb0978b1aae3c11a2c819bc2626e8621211c70ff867"
	I1009 18:48:57.211216   17296 logs.go:123] Gathering logs for CRI-O ...
	I1009 18:48:57.211253   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1009 18:48:57.288037   17296 logs.go:123] Gathering logs for etcd [1fa69ee53f8ff640d4a04e850ee35f59a25d1e106284730f1dfe646abd9fbe38] ...
	I1009 18:48:57.288078   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1fa69ee53f8ff640d4a04e850ee35f59a25d1e106284730f1dfe646abd9fbe38"
	I1009 18:48:57.330258   17296 logs.go:123] Gathering logs for coredns [02903fa33ba6d3cfcad710302d0a6876157cbaea48453fe830b9ea47e1e6a08b] ...
	I1009 18:48:57.330290   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 02903fa33ba6d3cfcad710302d0a6876157cbaea48453fe830b9ea47e1e6a08b"
	I1009 18:48:57.369141   17296 logs.go:123] Gathering logs for kube-scheduler [221dded81f0de01ca060a812c2b039396973be8cb1b615537a1dd3baf3739915] ...
	I1009 18:48:57.369185   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 221dded81f0de01ca060a812c2b039396973be8cb1b615537a1dd3baf3739915"
	I1009 18:48:57.409572   17296 logs.go:123] Gathering logs for kube-proxy [2ecd337cc588bb369774c4b52c5bf4512b7380cb6e1c083dfd205b222052bfb1] ...
	I1009 18:48:57.409605   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2ecd337cc588bb369774c4b52c5bf4512b7380cb6e1c083dfd205b222052bfb1"
	I1009 18:48:57.442324   17296 logs.go:123] Gathering logs for kubelet ...
	I1009 18:48:57.442359   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1009 18:48:57.487455   17296 logs.go:138] Found kubelet problem: Oct 09 18:47:06 addons-814968 kubelet[1623]: W1009 18:47:06.043067    1623 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-814968" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-814968' and this object
	W1009 18:48:57.487640   17296 logs.go:138] Found kubelet problem: Oct 09 18:47:06 addons-814968 kubelet[1623]: E1009 18:47:06.043134    1623 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-814968\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-814968' and this object" logger="UnhandledError"
	I1009 18:48:57.519372   17296 logs.go:123] Gathering logs for dmesg ...
	I1009 18:48:57.519410   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1009 18:48:57.531766   17296 logs.go:123] Gathering logs for describe nodes ...
	I1009 18:48:57.531800   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1009 18:48:57.630041   17296 logs.go:123] Gathering logs for kube-apiserver [16933cbf0d80208494e9bb3cb9830fd168761741d4d4e7647b2b55ed3eb43c8c] ...
	I1009 18:48:57.630077   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16933cbf0d80208494e9bb3cb9830fd168761741d4d4e7647b2b55ed3eb43c8c"
	I1009 18:48:57.673703   17296 logs.go:123] Gathering logs for kindnet [f2f6ada66ed9163b87179ad3082fc3f7b8897658ed1e8a9ee86fb7b0ffd5c67c] ...
	I1009 18:48:57.673733   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f2f6ada66ed9163b87179ad3082fc3f7b8897658ed1e8a9ee86fb7b0ffd5c67c"
	I1009 18:48:57.708491   17296 logs.go:123] Gathering logs for container status ...
	I1009 18:48:57.708522   17296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1009 18:48:57.749802   17296 out.go:358] Setting ErrFile to fd 2...
	I1009 18:48:57.749824   17296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1009 18:48:57.749877   17296 out.go:270] X Problems detected in kubelet:
	W1009 18:48:57.749890   17296 out.go:270]   Oct 09 18:47:06 addons-814968 kubelet[1623]: W1009 18:47:06.043067    1623 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-814968" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-814968' and this object
	W1009 18:48:57.749901   17296 out.go:270]   Oct 09 18:47:06 addons-814968 kubelet[1623]: E1009 18:47:06.043134    1623 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-814968\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-814968' and this object" logger="UnhandledError"
	I1009 18:48:57.749910   17296 out.go:358] Setting ErrFile to fd 2...
	I1009 18:48:57.749915   17296 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:49:07.760079   17296 system_pods.go:59] 18 kube-system pods found
	I1009 18:49:07.760124   17296 system_pods.go:61] "coredns-7c65d6cfc9-dcfpw" [ab2ddf3f-03de-4761-947c-d307eb22d417] Running
	I1009 18:49:07.760136   17296 system_pods.go:61] "csi-hostpath-attacher-0" [e272e252-86b0-4468-9131-dca02745720a] Running
	I1009 18:49:07.760141   17296 system_pods.go:61] "csi-hostpath-resizer-0" [bfd004fc-a591-4578-b359-f70ef5724f11] Running
	I1009 18:49:07.760146   17296 system_pods.go:61] "csi-hostpathplugin-fqb8x" [2f8a767d-d27d-4ba0-8919-fdc68455832c] Running
	I1009 18:49:07.760152   17296 system_pods.go:61] "etcd-addons-814968" [5100735e-81ed-4e86-9da0-3f7f79a02d4f] Running
	I1009 18:49:07.760157   17296 system_pods.go:61] "kindnet-mdrqx" [d90881e9-cfe6-4d42-8003-9efb160a7937] Running
	I1009 18:49:07.760162   17296 system_pods.go:61] "kube-apiserver-addons-814968" [315b151b-2aca-4e06-8c8a-e81807aa1638] Running
	I1009 18:49:07.760168   17296 system_pods.go:61] "kube-controller-manager-addons-814968" [0882300f-9693-46ce-a584-9712095a27ed] Running
	I1009 18:49:07.760176   17296 system_pods.go:61] "kube-ingress-dns-minikube" [5fd07203-977b-4e7c-b6db-81030c0af955] Running
	I1009 18:49:07.760183   17296 system_pods.go:61] "kube-proxy-wprfw" [9204c10f-c636-4846-8ee8-46635c3324e2] Running
	I1009 18:49:07.760191   17296 system_pods.go:61] "kube-scheduler-addons-814968" [b4efbf7d-41ce-447a-80d1-6d4fe68f3f0c] Running
	I1009 18:49:07.760197   17296 system_pods.go:61] "metrics-server-84c5f94fbc-5gbfm" [aecf0efb-0d9b-429c-82bb-0aa04751f7f0] Running
	I1009 18:49:07.760204   17296 system_pods.go:61] "nvidia-device-plugin-daemonset-7txf4" [91c3baad-6ee1-4595-bce6-7b2db5cb9cd3] Running
	I1009 18:49:07.760210   17296 system_pods.go:61] "registry-66c9cd494c-s2zbn" [e5e37670-4f6a-48d7-8ec0-96a1df679765] Running
	I1009 18:49:07.760218   17296 system_pods.go:61] "registry-proxy-zpr6p" [1a3e151b-470d-420f-a50b-d42194bf9620] Running
	I1009 18:49:07.760224   17296 system_pods.go:61] "snapshot-controller-56fcc65765-5z6gs" [4ed3dbbb-226e-4b73-bd8b-8bb50514d365] Running
	I1009 18:49:07.760233   17296 system_pods.go:61] "snapshot-controller-56fcc65765-l6fk4" [1f1a2f1f-a768-4156-b406-731c3890ec0f] Running
	I1009 18:49:07.760239   17296 system_pods.go:61] "storage-provisioner" [522ad8d0-bab3-4c94-9914-42a4afc097ba] Running
	I1009 18:49:07.760249   17296 system_pods.go:74] duration metric: took 10.87275449s to wait for pod list to return data ...
	I1009 18:49:07.760261   17296 default_sa.go:34] waiting for default service account to be created ...
	I1009 18:49:07.762809   17296 default_sa.go:45] found service account: "default"
	I1009 18:49:07.762830   17296 default_sa.go:55] duration metric: took 2.560915ms for default service account to be created ...
	I1009 18:49:07.762837   17296 system_pods.go:116] waiting for k8s-apps to be running ...
	I1009 18:49:07.771494   17296 system_pods.go:86] 18 kube-system pods found
	I1009 18:49:07.771528   17296 system_pods.go:89] "coredns-7c65d6cfc9-dcfpw" [ab2ddf3f-03de-4761-947c-d307eb22d417] Running
	I1009 18:49:07.771536   17296 system_pods.go:89] "csi-hostpath-attacher-0" [e272e252-86b0-4468-9131-dca02745720a] Running
	I1009 18:49:07.771542   17296 system_pods.go:89] "csi-hostpath-resizer-0" [bfd004fc-a591-4578-b359-f70ef5724f11] Running
	I1009 18:49:07.771547   17296 system_pods.go:89] "csi-hostpathplugin-fqb8x" [2f8a767d-d27d-4ba0-8919-fdc68455832c] Running
	I1009 18:49:07.771552   17296 system_pods.go:89] "etcd-addons-814968" [5100735e-81ed-4e86-9da0-3f7f79a02d4f] Running
	I1009 18:49:07.771558   17296 system_pods.go:89] "kindnet-mdrqx" [d90881e9-cfe6-4d42-8003-9efb160a7937] Running
	I1009 18:49:07.771563   17296 system_pods.go:89] "kube-apiserver-addons-814968" [315b151b-2aca-4e06-8c8a-e81807aa1638] Running
	I1009 18:49:07.771570   17296 system_pods.go:89] "kube-controller-manager-addons-814968" [0882300f-9693-46ce-a584-9712095a27ed] Running
	I1009 18:49:07.771577   17296 system_pods.go:89] "kube-ingress-dns-minikube" [5fd07203-977b-4e7c-b6db-81030c0af955] Running
	I1009 18:49:07.771582   17296 system_pods.go:89] "kube-proxy-wprfw" [9204c10f-c636-4846-8ee8-46635c3324e2] Running
	I1009 18:49:07.771589   17296 system_pods.go:89] "kube-scheduler-addons-814968" [b4efbf7d-41ce-447a-80d1-6d4fe68f3f0c] Running
	I1009 18:49:07.771597   17296 system_pods.go:89] "metrics-server-84c5f94fbc-5gbfm" [aecf0efb-0d9b-429c-82bb-0aa04751f7f0] Running
	I1009 18:49:07.771605   17296 system_pods.go:89] "nvidia-device-plugin-daemonset-7txf4" [91c3baad-6ee1-4595-bce6-7b2db5cb9cd3] Running
	I1009 18:49:07.771611   17296 system_pods.go:89] "registry-66c9cd494c-s2zbn" [e5e37670-4f6a-48d7-8ec0-96a1df679765] Running
	I1009 18:49:07.771617   17296 system_pods.go:89] "registry-proxy-zpr6p" [1a3e151b-470d-420f-a50b-d42194bf9620] Running
	I1009 18:49:07.771623   17296 system_pods.go:89] "snapshot-controller-56fcc65765-5z6gs" [4ed3dbbb-226e-4b73-bd8b-8bb50514d365] Running
	I1009 18:49:07.771629   17296 system_pods.go:89] "snapshot-controller-56fcc65765-l6fk4" [1f1a2f1f-a768-4156-b406-731c3890ec0f] Running
	I1009 18:49:07.771635   17296 system_pods.go:89] "storage-provisioner" [522ad8d0-bab3-4c94-9914-42a4afc097ba] Running
	I1009 18:49:07.771645   17296 system_pods.go:126] duration metric: took 8.802073ms to wait for k8s-apps to be running ...
	I1009 18:49:07.771658   17296 system_svc.go:44] waiting for kubelet service to be running ....
	I1009 18:49:07.771712   17296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 18:49:07.783055   17296 system_svc.go:56] duration metric: took 11.385735ms WaitForService to wait for kubelet
	I1009 18:49:07.783080   17296 kubeadm.go:582] duration metric: took 2m1.8689302s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1009 18:49:07.783098   17296 node_conditions.go:102] verifying NodePressure condition ...
	I1009 18:49:07.786198   17296 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1009 18:49:07.786246   17296 node_conditions.go:123] node cpu capacity is 8
	I1009 18:49:07.786260   17296 node_conditions.go:105] duration metric: took 3.157884ms to run NodePressure ...
	I1009 18:49:07.786271   17296 start.go:241] waiting for startup goroutines ...
	I1009 18:49:07.786278   17296 start.go:246] waiting for cluster config update ...
	I1009 18:49:07.786294   17296 start.go:255] writing updated cluster config ...
	I1009 18:49:07.786596   17296 ssh_runner.go:195] Run: rm -f paused
	I1009 18:49:07.837121   17296 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I1009 18:49:07.839371   17296 out.go:177] * Done! kubectl is now configured to use "addons-814968" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 09 19:00:21 addons-814968 crio[1029]: time="2024-10-09 19:00:21.454097154Z" level=info msg="Creating container: default/busybox/busybox" id=98fda8e5-a0de-458b-93dd-6e81d2873fe3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:00:21 addons-814968 crio[1029]: time="2024-10-09 19:00:21.454185717Z" level=warning msg="Allowed annotations are specified for workload []"
	Oct 09 19:00:21 addons-814968 crio[1029]: time="2024-10-09 19:00:21.502016936Z" level=info msg="Created container 68b5677df9d5bdb67545023049c015ca8ee3d78320a064773018ee9dd9da8ccc: default/busybox/busybox" id=98fda8e5-a0de-458b-93dd-6e81d2873fe3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 09 19:00:21 addons-814968 crio[1029]: time="2024-10-09 19:00:21.502634586Z" level=info msg="Starting container: 68b5677df9d5bdb67545023049c015ca8ee3d78320a064773018ee9dd9da8ccc" id=fb1d0178-8a75-4988-954d-18760e5fe645 name=/runtime.v1.RuntimeService/StartContainer
	Oct 09 19:00:21 addons-814968 crio[1029]: time="2024-10-09 19:00:21.507979526Z" level=info msg="Started container" PID=17781 containerID=68b5677df9d5bdb67545023049c015ca8ee3d78320a064773018ee9dd9da8ccc description=default/busybox/busybox id=fb1d0178-8a75-4988-954d-18760e5fe645 name=/runtime.v1.RuntimeService/StartContainer sandboxID=38774f451598c62d7154238b409035b1b7ca5805eb43ca93c88dc2eab5ba27bc
	Oct 09 19:01:00 addons-814968 crio[1029]: time="2024-10-09 19:01:00.642675487Z" level=info msg="Removing container: cf7b6540335cc74d669e696fae8fa539d667338a5e5d2bbe39f7995c8927426a" id=f2bbee04-a927-4b56-a0d0-5dff4018935e name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:01:00 addons-814968 crio[1029]: time="2024-10-09 19:01:00.655567575Z" level=info msg="Removed container cf7b6540335cc74d669e696fae8fa539d667338a5e5d2bbe39f7995c8927426a: ingress-nginx/ingress-nginx-admission-patch-lk7hx/patch" id=f2bbee04-a927-4b56-a0d0-5dff4018935e name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:01:00 addons-814968 crio[1029]: time="2024-10-09 19:01:00.656885399Z" level=info msg="Removing container: 586088eebfd1663d705834381461506296a52b567f7a74cf3386741b7495bb78" id=4599f03e-80ee-4715-8b85-ec53b72c9a63 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:01:00 addons-814968 crio[1029]: time="2024-10-09 19:01:00.671723553Z" level=info msg="Removed container 586088eebfd1663d705834381461506296a52b567f7a74cf3386741b7495bb78: ingress-nginx/ingress-nginx-admission-create-snl7p/create" id=4599f03e-80ee-4715-8b85-ec53b72c9a63 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 09 19:01:00 addons-814968 crio[1029]: time="2024-10-09 19:01:00.672961382Z" level=info msg="Stopping pod sandbox: c944a5188b1ddb754f6e2efd05a7d558569805b612ea656f87ba9d53426e137d" id=4cdbbca3-125e-4463-83f4-ffbecde53908 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 19:01:00 addons-814968 crio[1029]: time="2024-10-09 19:01:00.673000174Z" level=info msg="Stopped pod sandbox (already stopped): c944a5188b1ddb754f6e2efd05a7d558569805b612ea656f87ba9d53426e137d" id=4cdbbca3-125e-4463-83f4-ffbecde53908 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 19:01:00 addons-814968 crio[1029]: time="2024-10-09 19:01:00.673301949Z" level=info msg="Removing pod sandbox: c944a5188b1ddb754f6e2efd05a7d558569805b612ea656f87ba9d53426e137d" id=c76593d3-9e65-4db3-8914-f6cd212b6086 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 19:01:00 addons-814968 crio[1029]: time="2024-10-09 19:01:00.680095478Z" level=info msg="Removed pod sandbox: c944a5188b1ddb754f6e2efd05a7d558569805b612ea656f87ba9d53426e137d" id=c76593d3-9e65-4db3-8914-f6cd212b6086 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 19:01:00 addons-814968 crio[1029]: time="2024-10-09 19:01:00.680612130Z" level=info msg="Stopping pod sandbox: c23f11210a3a3d8b66164a00bdc570f5d38a7c43331423d39d47943f7cdb8f84" id=5cda083b-19bd-497a-981b-c4e1510a8782 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 19:01:00 addons-814968 crio[1029]: time="2024-10-09 19:01:00.680653280Z" level=info msg="Stopped pod sandbox (already stopped): c23f11210a3a3d8b66164a00bdc570f5d38a7c43331423d39d47943f7cdb8f84" id=5cda083b-19bd-497a-981b-c4e1510a8782 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 19:01:00 addons-814968 crio[1029]: time="2024-10-09 19:01:00.680971124Z" level=info msg="Removing pod sandbox: c23f11210a3a3d8b66164a00bdc570f5d38a7c43331423d39d47943f7cdb8f84" id=d35c2ad1-60c0-4fff-aad7-c0f9e1a33db1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 19:01:00 addons-814968 crio[1029]: time="2024-10-09 19:01:00.687585762Z" level=info msg="Removed pod sandbox: c23f11210a3a3d8b66164a00bdc570f5d38a7c43331423d39d47943f7cdb8f84" id=d35c2ad1-60c0-4fff-aad7-c0f9e1a33db1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 19:01:00 addons-814968 crio[1029]: time="2024-10-09 19:01:00.688037043Z" level=info msg="Stopping pod sandbox: f5cf1b38c4ff618174eeb7b55b6669435d0066db50c09b2d2653336c56117277" id=acebee00-4322-47a8-9100-9a605f98a22d name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 19:01:00 addons-814968 crio[1029]: time="2024-10-09 19:01:00.688067191Z" level=info msg="Stopped pod sandbox (already stopped): f5cf1b38c4ff618174eeb7b55b6669435d0066db50c09b2d2653336c56117277" id=acebee00-4322-47a8-9100-9a605f98a22d name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 19:01:00 addons-814968 crio[1029]: time="2024-10-09 19:01:00.688404894Z" level=info msg="Removing pod sandbox: f5cf1b38c4ff618174eeb7b55b6669435d0066db50c09b2d2653336c56117277" id=b68250c9-10a4-4d20-9813-3886a2927476 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 19:01:00 addons-814968 crio[1029]: time="2024-10-09 19:01:00.694936946Z" level=info msg="Removed pod sandbox: f5cf1b38c4ff618174eeb7b55b6669435d0066db50c09b2d2653336c56117277" id=b68250c9-10a4-4d20-9813-3886a2927476 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 19:01:00 addons-814968 crio[1029]: time="2024-10-09 19:01:00.695457865Z" level=info msg="Stopping pod sandbox: 5026de9c2694c7dde440b5f50e36141610ca15370dae1c9ffe7f902d40d39113" id=af10c2dc-a772-48e7-9eb3-20bf53ef51cf name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 19:01:00 addons-814968 crio[1029]: time="2024-10-09 19:01:00.695497092Z" level=info msg="Stopped pod sandbox (already stopped): 5026de9c2694c7dde440b5f50e36141610ca15370dae1c9ffe7f902d40d39113" id=af10c2dc-a772-48e7-9eb3-20bf53ef51cf name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 09 19:01:00 addons-814968 crio[1029]: time="2024-10-09 19:01:00.695779731Z" level=info msg="Removing pod sandbox: 5026de9c2694c7dde440b5f50e36141610ca15370dae1c9ffe7f902d40d39113" id=3e61d8dc-71ad-4680-9afb-565cdd9ac676 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 09 19:01:00 addons-814968 crio[1029]: time="2024-10-09 19:01:00.702033712Z" level=info msg="Removed pod sandbox: 5026de9c2694c7dde440b5f50e36141610ca15370dae1c9ffe7f902d40d39113" id=3e61d8dc-71ad-4680-9afb-565cdd9ac676 name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	68b5677df9d5b       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     About a minute ago   Running             busybox                   0                   38774f451598c       busybox
	834b53bede5f3       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago        Running             hello-world-app           0                   67b3815abd821       hello-world-app-55bf9c44b4-rxzq2
	e6c1bd6a1c201       docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250                         4 minutes ago        Running             nginx                     0                   44f6c5dfda074       nginx
	67672097bfd6f       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   14 minutes ago       Running             metrics-server            0                   2d3b55b67f56b       metrics-server-84c5f94fbc-5gbfm
	02903fa33ba6d       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        14 minutes ago       Running             coredns                   0                   886685be67e39       coredns-7c65d6cfc9-dcfpw
	8caeb8fad85ec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        14 minutes ago       Running             storage-provisioner       0                   db8294a55800d       storage-provisioner
	f2f6ada66ed91       docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387                      14 minutes ago       Running             kindnet-cni               0                   f9793dc7e2762       kindnet-mdrqx
	2ecd337cc588b       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        15 minutes ago       Running             kube-proxy                0                   2d910828bcffc       kube-proxy-wprfw
	1fa69ee53f8ff       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        15 minutes ago       Running             etcd                      0                   20a300e46e71c       etcd-addons-814968
	221dded81f0de       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        15 minutes ago       Running             kube-scheduler            0                   58088163aa98e       kube-scheduler-addons-814968
	6851332d0dffc       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        15 minutes ago       Running             kube-controller-manager   0                   9c018b10e40e0       kube-controller-manager-addons-814968
	16933cbf0d802       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        15 minutes ago       Running             kube-apiserver            0                   9076ebf2b8037       kube-apiserver-addons-814968
	
	
	==> coredns [02903fa33ba6d3cfcad710302d0a6876157cbaea48453fe830b9ea47e1e6a08b] <==
	[INFO] 10.244.0.19:50862 - 27956 "AAAA IN hello-world-app.default.svc.cluster.local.c.k8s-minikube.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.005721557s
	[INFO] 10.244.0.19:50955 - 26053 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005581556s
	[INFO] 10.244.0.19:56500 - 29815 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005653731s
	[INFO] 10.244.0.19:50862 - 55586 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005464585s
	[INFO] 10.244.0.19:34478 - 51194 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.005697594s
	[INFO] 10.244.0.19:37328 - 14114 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006042931s
	[INFO] 10.244.0.19:59661 - 26056 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006219038s
	[INFO] 10.244.0.19:40238 - 29424 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006114108s
	[INFO] 10.244.0.19:49378 - 52616 "A IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00626021s
	[INFO] 10.244.0.19:40238 - 57739 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006061337s
	[INFO] 10.244.0.19:34478 - 41694 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006449902s
	[INFO] 10.244.0.19:50955 - 4742 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006908613s
	[INFO] 10.244.0.19:34478 - 36887 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.0000909s
	[INFO] 10.244.0.19:50862 - 6849 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006690823s
	[INFO] 10.244.0.19:56500 - 48859 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006971052s
	[INFO] 10.244.0.19:59661 - 64737 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006589803s
	[INFO] 10.244.0.19:49378 - 49791 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.006809081s
	[INFO] 10.244.0.19:50862 - 44121 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000098359s
	[INFO] 10.244.0.19:40238 - 61663 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000355736s
	[INFO] 10.244.0.19:50955 - 49779 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000204086s
	[INFO] 10.244.0.19:37328 - 10190 "AAAA IN hello-world-app.default.svc.cluster.local.google.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 75 0.00706982s
	[INFO] 10.244.0.19:49378 - 18080 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000104022s
	[INFO] 10.244.0.19:56500 - 53256 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00016724s
	[INFO] 10.244.0.19:59661 - 53258 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000093756s
	[INFO] 10.244.0.19:37328 - 8180 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000062779s
	
	
	==> describe nodes <==
	Name:               addons-814968
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-814968
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=0f31bfe1a852f6cc79fedfeb2462ff6b6d86b5e4
	                    minikube.k8s.io/name=addons-814968
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_10_09T18_47_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-814968
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Oct 2024 18:46:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-814968
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Oct 2024 19:02:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Oct 2024 19:00:36 +0000   Wed, 09 Oct 2024 18:46:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Oct 2024 19:00:36 +0000   Wed, 09 Oct 2024 18:46:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Oct 2024 19:00:36 +0000   Wed, 09 Oct 2024 18:46:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Oct 2024 19:00:36 +0000   Wed, 09 Oct 2024 18:47:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-814968
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859316Ki
	  pods:               110
	System Info:
	  Machine ID:                 665ec1e43df44148875bede2afed5690
	  System UUID:                af1ce627-aaca-4c57-a0b5-20a11a6bd390
	  Boot ID:                    5492573a-87f0-4d18-a115-1fca0501652a
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-rxzq2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 coredns-7c65d6cfc9-dcfpw                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     15m
	  kube-system                 etcd-addons-814968                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         15m
	  kube-system                 kindnet-mdrqx                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-addons-814968             250m (3%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-addons-814968    200m (2%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-wprfw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-addons-814968             100m (1%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-84c5f94fbc-5gbfm          100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         15m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 14m   kube-proxy       
	  Normal   Starting                 15m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  15m   kubelet          Node addons-814968 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15m   kubelet          Node addons-814968 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15m   kubelet          Node addons-814968 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m   node-controller  Node addons-814968 event: Registered Node addons-814968 in Controller
	  Normal   NodeReady                14m   kubelet          Node addons-814968 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000613] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.000629] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000637] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000641] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000605] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000684] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000634] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.615432] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.065032] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.027177] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +6.993095] kauditd_printk_skb: 44 callbacks suppressed
	[Oct 9 18:57] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: d6 86 30 61 ea d3 0a f2 82 d1 8a 5a 08 00
	[  +1.019618] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000032] ll header: 00000000: d6 86 30 61 ea d3 0a f2 82 d1 8a 5a 08 00
	[  +2.019718] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: d6 86 30 61 ea d3 0a f2 82 d1 8a 5a 08 00
	[  +4.091682] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: d6 86 30 61 ea d3 0a f2 82 d1 8a 5a 08 00
	[Oct 9 18:58] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: d6 86 30 61 ea d3 0a f2 82 d1 8a 5a 08 00
	[ +16.122612] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: d6 86 30 61 ea d3 0a f2 82 d1 8a 5a 08 00
	[ +34.045062] IPv4: martian source 10.244.0.19 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: d6 86 30 61 ea d3 0a f2 82 d1 8a 5a 08 00
	
	
	==> etcd [1fa69ee53f8ff640d4a04e850ee35f59a25d1e106284730f1dfe646abd9fbe38] <==
	{"level":"warn","ts":"2024-10-09T18:47:07.545055Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.679118ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-addons-814968\" ","response":"range_response_count:1 size:4488"}
	{"level":"info","ts":"2024-10-09T18:47:07.545296Z","caller":"traceutil/trace.go:171","msg":"trace[254952150] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-addons-814968; range_end:; response_count:1; response_revision:354; }","duration":"100.926406ms","start":"2024-10-09T18:47:07.444349Z","end":"2024-10-09T18:47:07.545275Z","steps":["trace[254952150] 'agreement among raft nodes before linearized reading'  (duration: 100.575133ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-09T18:47:07.631074Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.939711ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-09T18:47:07.631162Z","caller":"traceutil/trace.go:171","msg":"trace[1315990972] range","detail":"{range_begin:/registry/daemonsets/kube-system/nvidia-device-plugin-daemonset; range_end:; response_count:0; response_revision:354; }","duration":"187.028388ms","start":"2024-10-09T18:47:07.444101Z","end":"2024-10-09T18:47:07.631130Z","steps":["trace[1315990972] 'agreement among raft nodes before linearized reading'  (duration: 99.819321ms)","trace[1315990972] 'range keys from in-memory index tree'  (duration: 87.095682ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-09T18:47:08.149189Z","caller":"traceutil/trace.go:171","msg":"trace[36121968] transaction","detail":"{read_only:false; response_revision:361; number_of_response:1; }","duration":"115.963151ms","start":"2024-10-09T18:47:08.033209Z","end":"2024-10-09T18:47:08.149172Z","steps":["trace[36121968] 'process raft request'  (duration: 115.292249ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T18:47:08.225266Z","caller":"traceutil/trace.go:171","msg":"trace[1679131097] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"101.228697ms","start":"2024-10-09T18:47:08.124022Z","end":"2024-10-09T18:47:08.225250Z","steps":["trace[1679131097] 'process raft request'  (duration: 101.19435ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T18:47:08.225544Z","caller":"traceutil/trace.go:171","msg":"trace[1385913193] transaction","detail":"{read_only:false; response_revision:362; number_of_response:1; }","duration":"183.311089ms","start":"2024-10-09T18:47:08.042224Z","end":"2024-10-09T18:47:08.225535Z","steps":["trace[1385913193] 'process raft request'  (duration: 182.814039ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T18:47:08.225700Z","caller":"traceutil/trace.go:171","msg":"trace[1097339318] transaction","detail":"{read_only:false; response_revision:363; number_of_response:1; }","duration":"183.219653ms","start":"2024-10-09T18:47:08.042474Z","end":"2024-10-09T18:47:08.225694Z","steps":["trace[1097339318] 'process raft request'  (duration: 182.651213ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T18:47:08.225768Z","caller":"traceutil/trace.go:171","msg":"trace[1121631799] transaction","detail":"{read_only:false; response_revision:364; number_of_response:1; }","duration":"183.22097ms","start":"2024-10-09T18:47:08.042542Z","end":"2024-10-09T18:47:08.225763Z","steps":["trace[1121631799] 'process raft request'  (duration: 182.623437ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T18:47:08.225867Z","caller":"traceutil/trace.go:171","msg":"trace[751389749] transaction","detail":"{read_only:false; response_revision:365; number_of_response:1; }","duration":"183.116259ms","start":"2024-10-09T18:47:08.042744Z","end":"2024-10-09T18:47:08.225860Z","steps":["trace[751389749] 'process raft request'  (duration: 182.445575ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-09T18:47:09.035003Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.512009ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-09T18:47:09.035261Z","caller":"traceutil/trace.go:171","msg":"trace[982658497] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:387; }","duration":"102.77967ms","start":"2024-10-09T18:47:08.932467Z","end":"2024-10-09T18:47:09.035247Z","steps":["trace[982658497] 'agreement among raft nodes before linearized reading'  (duration: 102.496738ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T18:47:11.844374Z","caller":"traceutil/trace.go:171","msg":"trace[200416362] transaction","detail":"{read_only:false; response_revision:603; number_of_response:1; }","duration":"104.878761ms","start":"2024-10-09T18:47:11.739474Z","end":"2024-10-09T18:47:11.844353Z","steps":["trace[200416362] 'process raft request'  (duration: 104.520364ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T18:47:47.942604Z","caller":"traceutil/trace.go:171","msg":"trace[1471584267] linearizableReadLoop","detail":"{readStateIndex:1005; appliedIndex:1004; }","duration":"101.305859ms","start":"2024-10-09T18:47:47.841276Z","end":"2024-10-09T18:47:47.942582Z","steps":["trace[1471584267] 'read index received'  (duration: 37.289331ms)","trace[1471584267] 'applied index is now lower than readState.Index'  (duration: 64.015916ms)"],"step_count":2}
	{"level":"info","ts":"2024-10-09T18:47:47.942691Z","caller":"traceutil/trace.go:171","msg":"trace[1349289586] transaction","detail":"{read_only:false; response_revision:978; number_of_response:1; }","duration":"105.013231ms","start":"2024-10-09T18:47:47.837656Z","end":"2024-10-09T18:47:47.942669Z","steps":["trace[1349289586] 'process raft request'  (duration: 40.936293ms)","trace[1349289586] 'compare'  (duration: 63.914511ms)"],"step_count":2}
	{"level":"warn","ts":"2024-10-09T18:47:47.942715Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"101.416281ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-10-09T18:47:47.942743Z","caller":"traceutil/trace.go:171","msg":"trace[2022166272] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:978; }","duration":"101.466725ms","start":"2024-10-09T18:47:47.841269Z","end":"2024-10-09T18:47:47.942736Z","steps":["trace[2022166272] 'agreement among raft nodes before linearized reading'  (duration: 101.391084ms)"],"step_count":1}
	{"level":"warn","ts":"2024-10-09T18:47:58.265012Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.891286ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/addons-814968\" ","response":"range_response_count:1 size:6238"}
	{"level":"info","ts":"2024-10-09T18:47:58.265079Z","caller":"traceutil/trace.go:171","msg":"trace[1066582241] range","detail":"{range_begin:/registry/minions/addons-814968; range_end:; response_count:1; response_revision:1038; }","duration":"110.969136ms","start":"2024-10-09T18:47:58.154097Z","end":"2024-10-09T18:47:58.265066Z","steps":["trace[1066582241] 'range keys from in-memory index tree'  (duration: 110.738887ms)"],"step_count":1}
	{"level":"info","ts":"2024-10-09T18:56:57.155879Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1514}
	{"level":"info","ts":"2024-10-09T18:56:57.179084Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1514,"took":"22.769077ms","hash":3916989673,"current-db-size-bytes":6021120,"current-db-size":"6.0 MB","current-db-size-in-use-bytes":3117056,"current-db-size-in-use":"3.1 MB"}
	{"level":"info","ts":"2024-10-09T18:56:57.179141Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3916989673,"revision":1514,"compact-revision":-1}
	{"level":"info","ts":"2024-10-09T19:01:57.160529Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1930}
	{"level":"info","ts":"2024-10-09T19:01:57.176727Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1930,"took":"15.706646ms","hash":3563825899,"current-db-size-bytes":6021120,"current-db-size":"6.0 MB","current-db-size-in-use-bytes":5111808,"current-db-size-in-use":"5.1 MB"}
	{"level":"info","ts":"2024-10-09T19:01:57.176777Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3563825899,"revision":1930,"compact-revision":1514}
	
	
	==> kernel <==
	 19:02:10 up 44 min,  0 users,  load average: 0.09, 0.49, 0.47
	Linux addons-814968 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [f2f6ada66ed9163b87179ad3082fc3f7b8897658ed1e8a9ee86fb7b0ffd5c67c] <==
	I1009 19:00:04.524200       1 main.go:300] handling current node
	I1009 19:00:14.524189       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:00:14.524223       1 main.go:300] handling current node
	I1009 19:00:24.524357       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:00:24.524417       1 main.go:300] handling current node
	I1009 19:00:34.531281       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:00:34.531325       1 main.go:300] handling current node
	I1009 19:00:44.527306       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:00:44.527354       1 main.go:300] handling current node
	I1009 19:00:54.532644       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:00:54.532688       1 main.go:300] handling current node
	I1009 19:01:04.524691       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:01:04.524734       1 main.go:300] handling current node
	I1009 19:01:14.524576       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:01:14.524616       1 main.go:300] handling current node
	I1009 19:01:24.527271       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:01:24.527326       1 main.go:300] handling current node
	I1009 19:01:34.530201       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:01:34.530242       1 main.go:300] handling current node
	I1009 19:01:44.527277       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:01:44.527318       1 main.go:300] handling current node
	I1009 19:01:54.527258       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:01:54.527293       1 main.go:300] handling current node
	I1009 19:02:04.523782       1 main.go:296] Handling node with IPs: map[192.168.49.2:{}]
	I1009 19:02:04.523839       1 main.go:300] handling current node
	
	
	==> kube-apiserver [16933cbf0d80208494e9bb3cb9830fd168761741d4d4e7647b2b55ed3eb43c8c] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1009 18:48:34.966248       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.134.125:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.134.125:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.134.125:443: connect: connection refused" logger="UnhandledError"
	E1009 18:48:34.967635       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.134.125:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.134.125:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.134.125:443: connect: connection refused" logger="UnhandledError"
	I1009 18:48:35.001203       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1009 18:57:19.958662       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.107.55.9"}
	I1009 18:57:37.053068       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1009 18:57:37.230770       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.217.65"}
	I1009 18:57:39.760775       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1009 18:57:40.831110       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1009 18:57:50.618911       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1009 18:58:10.205575       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:58:10.205734       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 18:58:10.219122       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:58:10.219341       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 18:58:10.219439       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:58:10.232676       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:58:10.232824       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1009 18:58:10.242429       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1009 18:58:10.242751       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1009 18:58:11.223635       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1009 18:58:11.243020       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1009 18:58:11.251642       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E1009 18:58:29.000407       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1009 18:59:58.847826       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.111.224.125"}
	
	
	==> kube-controller-manager [6851332d0dffcb1290144bb0978b1aae3c11a2c819bc2626e8621211c70ff867] <==
	E1009 19:00:09.949919       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1009 19:00:13.323468       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W1009 19:00:15.786135       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:00:15.786184       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:00:30.621657       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:00:30.621713       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1009 19:00:36.913684       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-814968"
	W1009 19:00:44.717497       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:00:44.717546       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:00:45.916152       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:00:45.916191       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:01:11.791390       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:01:11.791443       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:01:12.668046       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:01:12.668087       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:01:29.683092       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:01:29.683130       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:01:43.537262       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:01:43.537306       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:02:03.510668       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:02:03.510712       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:02:04.332648       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:02:04.332692       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1009 19:02:04.404706       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1009 19:02:04.404744       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [2ecd337cc588bb369774c4b52c5bf4512b7380cb6e1c083dfd205b222052bfb1] <==
	I1009 18:47:09.633707       1 server_linux.go:66] "Using iptables proxy"
	I1009 18:47:10.538221       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1009 18:47:10.538329       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1009 18:47:10.729409       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1009 18:47:10.729496       1 server_linux.go:169] "Using iptables Proxier"
	I1009 18:47:10.733444       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1009 18:47:10.734161       1 server.go:483] "Version info" version="v1.31.1"
	I1009 18:47:10.734191       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1009 18:47:10.736403       1 config.go:199] "Starting service config controller"
	I1009 18:47:10.736502       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1009 18:47:10.736578       1 config.go:105] "Starting endpoint slice config controller"
	I1009 18:47:10.740861       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1009 18:47:10.736628       1 config.go:328] "Starting node config controller"
	I1009 18:47:10.740890       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1009 18:47:10.930430       1 shared_informer.go:320] Caches are synced for node config
	I1009 18:47:10.930543       1 shared_informer.go:320] Caches are synced for service config
	I1009 18:47:10.930482       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [221dded81f0de01ca060a812c2b039396973be8cb1b615537a1dd3baf3739915] <==
	E1009 18:46:58.244832       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1009 18:46:58.244829       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:46:59.089011       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1009 18:46:59.089059       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:46:59.098410       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1009 18:46:59.098445       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:46:59.138894       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1009 18:46:59.138936       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:46:59.174617       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1009 18:46:59.174662       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:46:59.180011       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1009 18:46:59.180023       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1009 18:46:59.180051       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E1009 18:46:59.180051       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 18:46:59.245248       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1009 18:46:59.245287       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1009 18:46:59.290527       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1009 18:46:59.290579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1009 18:46:59.372229       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1009 18:46:59.372274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1009 18:46:59.389743       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1009 18:46:59.389793       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1009 18:46:59.399276       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1009 18:46:59.399320       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1009 18:46:59.641759       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 09 19:00:30 addons-814968 kubelet[1623]: E1009 19:00:30.609108    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500430608802554,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604826,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:00:40 addons-814968 kubelet[1623]: E1009 19:00:40.611685    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500440611424838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604826,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:00:40 addons-814968 kubelet[1623]: E1009 19:00:40.611724    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500440611424838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604826,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:00:50 addons-814968 kubelet[1623]: E1009 19:00:50.614486    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500450614186979,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604826,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:00:50 addons-814968 kubelet[1623]: E1009 19:00:50.614525    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500450614186979,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604826,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:01:00 addons-814968 kubelet[1623]: E1009 19:01:00.617394    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500460617099273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604826,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:01:00 addons-814968 kubelet[1623]: E1009 19:01:00.617429    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500460617099273,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604826,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:01:00 addons-814968 kubelet[1623]: I1009 19:01:00.641544    1623 scope.go:117] "RemoveContainer" containerID="cf7b6540335cc74d669e696fae8fa539d667338a5e5d2bbe39f7995c8927426a"
	Oct 09 19:01:00 addons-814968 kubelet[1623]: I1009 19:01:00.655846    1623 scope.go:117] "RemoveContainer" containerID="586088eebfd1663d705834381461506296a52b567f7a74cf3386741b7495bb78"
	Oct 09 19:01:10 addons-814968 kubelet[1623]: E1009 19:01:10.619737    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500470619443464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604826,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:01:10 addons-814968 kubelet[1623]: E1009 19:01:10.619777    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500470619443464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604826,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:01:20 addons-814968 kubelet[1623]: E1009 19:01:20.621695    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500480621433472,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604826,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:01:20 addons-814968 kubelet[1623]: E1009 19:01:20.621726    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500480621433472,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604826,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:01:30 addons-814968 kubelet[1623]: E1009 19:01:30.624246    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500490624013040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604826,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:01:30 addons-814968 kubelet[1623]: E1009 19:01:30.624279    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500490624013040,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604826,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:01:40 addons-814968 kubelet[1623]: E1009 19:01:40.626894    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500500626607480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604826,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:01:40 addons-814968 kubelet[1623]: E1009 19:01:40.626933    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500500626607480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604826,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:01:50 addons-814968 kubelet[1623]: I1009 19:01:50.337150    1623 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Oct 09 19:01:50 addons-814968 kubelet[1623]: E1009 19:01:50.629390    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500510629104527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604826,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:01:50 addons-814968 kubelet[1623]: E1009 19:01:50.629425    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500510629104527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604826,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:02:00 addons-814968 kubelet[1623]: E1009 19:02:00.357663    1623 container_manager_linux.go:513] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/1cffd86fbfa3a19811ce9735ea408f354dd5b50821a9dbeddc1a98d29ea08057, memory: /docker/1cffd86fbfa3a19811ce9735ea408f354dd5b50821a9dbeddc1a98d29ea08057/system.slice/kubelet.service"
	Oct 09 19:02:00 addons-814968 kubelet[1623]: E1009 19:02:00.632007    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500520631770462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604826,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:02:00 addons-814968 kubelet[1623]: E1009 19:02:00.632043    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500520631770462,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604826,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:02:10 addons-814968 kubelet[1623]: E1009 19:02:10.634144    1623 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500530633876053,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604826,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Oct 09 19:02:10 addons-814968 kubelet[1623]: E1009 19:02:10.634193    1623 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1728500530633876053,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604826,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [8caeb8fad85ec95c5166c64c88db374ab53bae2b4b1c9d62f3e98a0c1445a981] <==
	I1009 18:47:25.571754       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1009 18:47:25.580483       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1009 18:47:25.580515       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1009 18:47:25.631459       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1009 18:47:25.631589       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"5ac3e4d7-32fd-45bb-9f1c-61752b666082", APIVersion:"v1", ResourceVersion:"875", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-814968_ab2696c8-e483-426a-9a4f-d5167d195767 became leader
	I1009 18:47:25.631700       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-814968_ab2696c8-e483-426a-9a4f-d5167d195767!
	I1009 18:47:25.732640       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-814968_ab2696c8-e483-426a-9a4f-d5167d195767!
	

-- /stdout --
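Two signals in the captured logs above are worth separating when triaging. The kube-scheduler "forbidden" list/watch errors are clustered in the first seconds after startup (18:46:59) and are typically transient noise that stops once the scheduler's RBAC bindings propagate; the kubelet's recurring eviction-manager "missing image stats" errors appear to be benign cri-o stats noise repeating every ten seconds rather than a failure cause. The actionable line is the kubelet's 'secret "gcp-auth" not found' warning for default/busybox, which matches the PullSecret failure reported earlier. A minimal re-check against a live cluster (a sketch, assuming the profile name addons-814968 from this run) would be:

    kubectl --context addons-814968 get secret gcp-auth -n default
    kubectl --context addons-814968 get events -n default --field-selector involvedObject.name=busybox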
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-814968 -n addons-814968
helpers_test.go:261: (dbg) Run:  kubectl --context addons-814968 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-814968 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (292.51s)
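The disable step above is cleanup that runs regardless of outcome, so it does not show whether metrics-server ever became ready. A hedged first step when reproducing locally (profile name taken from this run; the v1beta1.metrics.k8s.io APIService name and the k8s-app=metrics-server label follow the upstream metrics-server conventions and are assumed here) is:

    kubectl --context addons-814968 get apiservice v1beta1.metrics.k8s.io
    kubectl --context addons-814968 -n kube-system get deploy,pods -l k8s-app=metrics-server
    kubectl --context addons-814968 top nodes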

Test pass (300/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.23
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 3.43
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1.06
21 TestBinaryMirror 0.76
22 TestOffline 59.21
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 164.26
31 TestAddons/serial/GCPAuth/Namespaces 0.14
34 TestAddons/parallel/Registry 13.82
36 TestAddons/parallel/InspektorGadget 12.02
39 TestAddons/parallel/CSI 57.71
40 TestAddons/parallel/Headlamp 17.54
41 TestAddons/parallel/CloudSpanner 5.49
42 TestAddons/parallel/LocalPath 53.8
43 TestAddons/parallel/NvidiaDevicePlugin 5.47
44 TestAddons/parallel/Yakd 11.67
45 TestAddons/StoppedEnableDisable 12.06
46 TestCertOptions 27.66
47 TestCertExpiration 228.86
49 TestForceSystemdFlag 26.78
50 TestForceSystemdEnv 32.44
52 TestKVMDriverInstallOrUpdate 3.34
56 TestErrorSpam/setup 20.99
57 TestErrorSpam/start 0.59
58 TestErrorSpam/status 0.89
59 TestErrorSpam/pause 1.59
60 TestErrorSpam/unpause 1.59
61 TestErrorSpam/stop 1.36
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 41.13
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 26.86
68 TestFunctional/serial/KubeContext 0.04
69 TestFunctional/serial/KubectlGetPods 0.08
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.13
73 TestFunctional/serial/CacheCmd/cache/add_local 1.39
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.74
78 TestFunctional/serial/CacheCmd/cache/delete 0.1
79 TestFunctional/serial/MinikubeKubectlCmd 0.11
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
81 TestFunctional/serial/ExtraConfig 39.18
82 TestFunctional/serial/ComponentHealth 0.07
83 TestFunctional/serial/LogsCmd 1.4
84 TestFunctional/serial/LogsFileCmd 1.42
85 TestFunctional/serial/InvalidService 3.94
87 TestFunctional/parallel/ConfigCmd 0.4
88 TestFunctional/parallel/DashboardCmd 8.3
89 TestFunctional/parallel/DryRun 0.41
90 TestFunctional/parallel/InternationalLanguage 0.18
91 TestFunctional/parallel/StatusCmd 1.03
95 TestFunctional/parallel/ServiceCmdConnect 10.71
96 TestFunctional/parallel/AddonsCmd 0.15
97 TestFunctional/parallel/PersistentVolumeClaim 29.75
99 TestFunctional/parallel/SSHCmd 0.65
100 TestFunctional/parallel/CpCmd 1.89
101 TestFunctional/parallel/MySQL 19.91
102 TestFunctional/parallel/FileSync 0.29
103 TestFunctional/parallel/CertSync 1.8
107 TestFunctional/parallel/NodeLabels 0.07
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.59
111 TestFunctional/parallel/License 0.18
112 TestFunctional/parallel/ServiceCmd/DeployApp 9.23
114 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.45
115 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.27
118 TestFunctional/parallel/ServiceCmd/List 0.51
119 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
120 TestFunctional/parallel/ServiceCmd/HTTPS 0.56
121 TestFunctional/parallel/ServiceCmd/Format 0.44
122 TestFunctional/parallel/ServiceCmd/URL 0.38
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.5
130 TestFunctional/parallel/MountCmd/any-port 5.65
131 TestFunctional/parallel/ProfileCmd/profile_list 0.45
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
133 TestFunctional/parallel/Version/short 0.06
134 TestFunctional/parallel/Version/components 0.52
135 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
138 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
139 TestFunctional/parallel/ImageCommands/ImageBuild 3.93
140 TestFunctional/parallel/ImageCommands/Setup 1
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.42
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.89
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.49
144 TestFunctional/parallel/MountCmd/specific-port 1.69
145 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.5
146 TestFunctional/parallel/MountCmd/VerifyCleanup 1.77
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.56
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.93
149 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
150 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
151 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.72
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 100.59
160 TestMultiControlPlane/serial/DeployApp 6.05
161 TestMultiControlPlane/serial/PingHostFromPods 1.07
162 TestMultiControlPlane/serial/AddWorkerNode 35.73
163 TestMultiControlPlane/serial/NodeLabels 0.06
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
165 TestMultiControlPlane/serial/CopyFile 16.13
166 TestMultiControlPlane/serial/StopSecondaryNode 12.5
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
168 TestMultiControlPlane/serial/RestartSecondaryNode 20.93
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.08
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 198.22
171 TestMultiControlPlane/serial/DeleteSecondaryNode 11.31
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
173 TestMultiControlPlane/serial/StopCluster 35.54
174 TestMultiControlPlane/serial/RestartCluster 81.37
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
176 TestMultiControlPlane/serial/AddSecondaryNode 37.84
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.88
181 TestJSONOutput/start/Command 42.64
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.66
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.6
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.73
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.21
206 TestKicCustomNetwork/create_custom_network 26.59
207 TestKicCustomNetwork/use_default_bridge_network 23.46
208 TestKicExistingNetwork 24.94
209 TestKicCustomSubnet 24.32
210 TestKicStaticIP 27.38
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 49.02
215 TestMountStart/serial/StartWithMountFirst 8.4
216 TestMountStart/serial/VerifyMountFirst 0.25
217 TestMountStart/serial/StartWithMountSecond 5.4
218 TestMountStart/serial/VerifyMountSecond 0.25
219 TestMountStart/serial/DeleteFirst 1.62
220 TestMountStart/serial/VerifyMountPostDelete 0.25
221 TestMountStart/serial/Stop 1.18
222 TestMountStart/serial/RestartStopped 7.21
223 TestMountStart/serial/VerifyMountPostStop 0.25
226 TestMultiNode/serial/FreshStart2Nodes 73.43
227 TestMultiNode/serial/DeployApp2Nodes 5.52
228 TestMultiNode/serial/PingHostFrom2Pods 0.72
229 TestMultiNode/serial/AddNode 28.33
230 TestMultiNode/serial/MultiNodeLabels 0.06
231 TestMultiNode/serial/ProfileList 0.64
232 TestMultiNode/serial/CopyFile 9.23
233 TestMultiNode/serial/StopNode 2.13
234 TestMultiNode/serial/StartAfterStop 9.15
235 TestMultiNode/serial/RestartKeepsNodes 113.1
236 TestMultiNode/serial/DeleteNode 5.3
237 TestMultiNode/serial/StopMultiNode 23.7
238 TestMultiNode/serial/RestartMultiNode 56.74
239 TestMultiNode/serial/ValidateNameConflict 25.53
244 TestPreload 102.13
246 TestScheduledStopUnix 98.73
249 TestInsufficientStorage 12.4
250 TestRunningBinaryUpgrade 68.42
252 TestKubernetesUpgrade 360.02
253 TestMissingContainerUpgrade 131.52
258 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
259 TestNoKubernetes/serial/StartWithK8s 35.29
264 TestNetworkPlugins/group/false 7.77
268 TestNoKubernetes/serial/StartWithStopK8s 8.93
269 TestNoKubernetes/serial/Start 12.62
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
271 TestNoKubernetes/serial/ProfileList 11.72
272 TestNoKubernetes/serial/Stop 1.84
273 TestNoKubernetes/serial/StartNoArgs 8.2
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
275 TestStoppedBinaryUpgrade/Setup 0.55
276 TestStoppedBinaryUpgrade/Upgrade 55.04
277 TestStoppedBinaryUpgrade/MinikubeLogs 0.82
286 TestPause/serial/Start 44.58
287 TestNetworkPlugins/group/auto/Start 41.46
288 TestPause/serial/SecondStartNoReconfiguration 21.05
289 TestNetworkPlugins/group/auto/KubeletFlags 0.28
290 TestNetworkPlugins/group/auto/NetCatPod 10.2
291 TestPause/serial/Pause 0.71
292 TestPause/serial/VerifyStatus 0.3
293 TestPause/serial/Unpause 0.66
294 TestPause/serial/PauseAgain 0.83
295 TestPause/serial/DeletePaused 2.63
296 TestPause/serial/VerifyDeletedResources 0.78
297 TestNetworkPlugins/group/auto/DNS 0.19
298 TestNetworkPlugins/group/auto/Localhost 0.19
299 TestNetworkPlugins/group/auto/HairPin 0.14
300 TestNetworkPlugins/group/kindnet/Start 45.52
301 TestNetworkPlugins/group/flannel/Start 47.59
302 TestNetworkPlugins/group/enable-default-cni/Start 70.01
303 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
304 TestNetworkPlugins/group/kindnet/KubeletFlags 0.26
305 TestNetworkPlugins/group/kindnet/NetCatPod 10.2
306 TestNetworkPlugins/group/flannel/ControllerPod 6.01
307 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
308 TestNetworkPlugins/group/flannel/NetCatPod 9.23
309 TestNetworkPlugins/group/kindnet/DNS 0.13
310 TestNetworkPlugins/group/kindnet/Localhost 0.13
311 TestNetworkPlugins/group/kindnet/HairPin 0.11
312 TestNetworkPlugins/group/flannel/DNS 0.13
313 TestNetworkPlugins/group/flannel/Localhost 0.11
314 TestNetworkPlugins/group/flannel/HairPin 0.11
315 TestNetworkPlugins/group/bridge/Start 68.79
316 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
317 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.19
318 TestNetworkPlugins/group/custom-flannel/Start 49.02
319 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
320 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
321 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
322 TestNetworkPlugins/group/calico/Start 55.47
323 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
324 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.2
325 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
326 TestNetworkPlugins/group/bridge/NetCatPod 10.23
327 TestNetworkPlugins/group/custom-flannel/DNS 0.18
328 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
329 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
331 TestStartStop/group/old-k8s-version/serial/FirstStart 131.67
332 TestNetworkPlugins/group/bridge/DNS 0.23
333 TestNetworkPlugins/group/bridge/Localhost 0.12
334 TestNetworkPlugins/group/bridge/HairPin 0.15
336 TestStartStop/group/no-preload/serial/FirstStart 59.4
337 TestNetworkPlugins/group/calico/ControllerPod 6.01
339 TestStartStop/group/embed-certs/serial/FirstStart 52.55
340 TestNetworkPlugins/group/calico/KubeletFlags 0.36
341 TestNetworkPlugins/group/calico/NetCatPod 11.68
342 TestNetworkPlugins/group/calico/DNS 0.14
343 TestNetworkPlugins/group/calico/Localhost 0.14
344 TestNetworkPlugins/group/calico/HairPin 0.11
346 TestStartStop/group/newest-cni/serial/FirstStart 25.53
347 TestStartStop/group/embed-certs/serial/DeployApp 8.24
348 TestStartStop/group/no-preload/serial/DeployApp 7.29
349 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.98
350 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.98
351 TestStartStop/group/embed-certs/serial/Stop 11.98
352 TestStartStop/group/no-preload/serial/Stop 11.95
353 TestStartStop/group/newest-cni/serial/DeployApp 0
354 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.77
355 TestStartStop/group/newest-cni/serial/Stop 1.19
356 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
357 TestStartStop/group/newest-cni/serial/SecondStart 13.12
358 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
359 TestStartStop/group/embed-certs/serial/SecondStart 264.06
360 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
361 TestStartStop/group/no-preload/serial/SecondStart 263.26
362 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
363 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
364 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
365 TestStartStop/group/newest-cni/serial/Pause 3.7
367 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 48.18
368 TestStartStop/group/old-k8s-version/serial/DeployApp 8.4
369 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.83
370 TestStartStop/group/old-k8s-version/serial/Stop 12.04
371 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
372 TestStartStop/group/old-k8s-version/serial/SecondStart 143.76
373 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.32
374 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.9
375 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.21
376 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
377 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 274.51
378 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
379 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
380 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
381 TestStartStop/group/old-k8s-version/serial/Pause 2.57
382 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
383 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
384 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
385 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
386 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
387 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
388 TestStartStop/group/no-preload/serial/Pause 2.83
389 TestStartStop/group/embed-certs/serial/Pause 2.83
390 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
391 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
392 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
393 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.56
TestDownloadOnly/v1.20.0/json-events (6.23s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-543737 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-543737 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.230402248s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.23s)
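The json-events subtest exercises the machine-readable event stream that -o=json emits during a download-only start. To eyeball the same stream by hand (a sketch, not part of the test; jq is assumed to be installed and the profile name is arbitrary):

    out/minikube-linux-amd64 start -o=json --download-only -p scratch --force --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker | jq .
    out/minikube-linux-amd64 delete -p scratch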

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1009 18:46:17.229469   15983 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1009 18:46:17.229569   15983 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9209/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
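preload-exists passes by finding the tarball cached by the previous subtest at the path logged above. If it ever fails, the artifact can be checked by hand: the expected md5 is pinned in the download URL recorded later in this report (checksum=md5:f93b07cde9c3289306cbaeb7a1803c19), so a manual verification (a sketch, using the same Jenkins paths as this run) is:

    md5sum /home/jenkins/minikube-integration/19780-9209/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
    # expect: f93b07cde9c3289306cbaeb7a1803c19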

TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-543737
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-543737: exit status 85 (61.939175ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-543737 | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |          |
	|         | -p download-only-543737        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 18:46:11
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:46:11.041190   15995 out.go:345] Setting OutFile to fd 1 ...
	I1009 18:46:11.041310   15995 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:46:11.041318   15995 out.go:358] Setting ErrFile to fd 2...
	I1009 18:46:11.041323   15995 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:46:11.041539   15995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9209/.minikube/bin
	W1009 18:46:11.041664   15995 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19780-9209/.minikube/config/config.json: open /home/jenkins/minikube-integration/19780-9209/.minikube/config/config.json: no such file or directory
	I1009 18:46:11.042220   15995 out.go:352] Setting JSON to true
	I1009 18:46:11.043110   15995 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1722,"bootTime":1728497849,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:46:11.043244   15995 start.go:139] virtualization: kvm guest
	I1009 18:46:11.045751   15995 out.go:97] [download-only-543737] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1009 18:46:11.045874   15995 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19780-9209/.minikube/cache/preloaded-tarball: no such file or directory
	I1009 18:46:11.045923   15995 notify.go:220] Checking for updates...
	I1009 18:46:11.047237   15995 out.go:169] MINIKUBE_LOCATION=19780
	I1009 18:46:11.048747   15995 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:46:11.050022   15995 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19780-9209/kubeconfig
	I1009 18:46:11.051316   15995 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9209/.minikube
	I1009 18:46:11.052610   15995 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1009 18:46:11.055073   15995 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 18:46:11.055285   15995 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 18:46:11.077549   15995 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1009 18:46:11.077650   15995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:46:11.448201   15995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-10-09 18:46:11.438511669 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:46:11.448307   15995 docker.go:318] overlay module found
	I1009 18:46:11.450024   15995 out.go:97] Using the docker driver based on user configuration
	I1009 18:46:11.450052   15995 start.go:297] selected driver: docker
	I1009 18:46:11.450058   15995 start.go:901] validating driver "docker" against <nil>
	I1009 18:46:11.450136   15995 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:46:11.501690   15995 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-10-09 18:46:11.490821545 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:46:11.501882   15995 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 18:46:11.502542   15995 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1009 18:46:11.502704   15995 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 18:46:11.504460   15995 out.go:169] Using Docker driver with root privileges
	I1009 18:46:11.505737   15995 cni.go:84] Creating CNI manager for ""
	I1009 18:46:11.505803   15995 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1009 18:46:11.505816   15995 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1009 18:46:11.505893   15995 start.go:340] cluster config:
	{Name:download-only-543737 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-543737 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 18:46:11.507316   15995 out.go:97] Starting "download-only-543737" primary control-plane node in "download-only-543737" cluster
	I1009 18:46:11.507348   15995 cache.go:121] Beginning downloading kic base image for docker with crio
	I1009 18:46:11.508515   15995 out.go:97] Pulling base image v0.0.45-1728382586-19774 ...
	I1009 18:46:11.508536   15995 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1009 18:46:11.508585   15995 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local docker daemon
	I1009 18:46:11.524103   15995 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1009 18:46:11.524285   15995 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec in local cache directory
	I1009 18:46:11.524375   15995 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec to local cache
	I1009 18:46:11.539230   15995 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1009 18:46:11.539254   15995 cache.go:56] Caching tarball of preloaded images
	I1009 18:46:11.539413   15995 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1009 18:46:11.541235   15995 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1009 18:46:11.541251   15995 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1009 18:46:11.569018   15995 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19780-9209/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1009 18:46:14.753535   15995 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec as a tarball
	I1009 18:46:15.734476   15995 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1009 18:46:15.734594   15995 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19780-9209/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-543737 host does not exist
	  To start a cluster, run: "minikube start -p download-only-543737"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)
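Note that this PASS records an expected non-zero exit: the profile was created with --download-only, so no host exists and minikube logs has nothing to collect, as the "* The control-plane node download-only-543737 host does not exist" message in the stdout above says. Reproducing by hand (a sketch, assuming the profile has not yet been removed by the later Delete subtests):

    out/minikube-linux-amd64 logs -p download-only-543737; echo "exit status: $?"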

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-543737
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.1/json-events (3.43s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-509071 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-509071 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.426232971s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (3.43s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I1009 18:46:21.059179   15983 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I1009 18:46:21.059248   15983 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19780-9209/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-509071
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-509071: exit status 85 (60.202715ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-543737 | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | -p download-only-543737        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:46 UTC |
	| delete  | -p download-only-543737        | download-only-543737 | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC | 09 Oct 24 18:46 UTC |
	| start   | -o=json --download-only        | download-only-509071 | jenkins | v1.34.0 | 09 Oct 24 18:46 UTC |                     |
	|         | -p download-only-509071        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/10/09 18:46:17
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1009 18:46:17.673254   16346 out.go:345] Setting OutFile to fd 1 ...
	I1009 18:46:17.673386   16346 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:46:17.673395   16346 out.go:358] Setting ErrFile to fd 2...
	I1009 18:46:17.673403   16346 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 18:46:17.673618   16346 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9209/.minikube/bin
	I1009 18:46:17.674194   16346 out.go:352] Setting JSON to true
	I1009 18:46:17.675018   16346 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":1729,"bootTime":1728497849,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 18:46:17.675112   16346 start.go:139] virtualization: kvm guest
	I1009 18:46:17.677531   16346 out.go:97] [download-only-509071] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 18:46:17.677696   16346 notify.go:220] Checking for updates...
	I1009 18:46:17.679135   16346 out.go:169] MINIKUBE_LOCATION=19780
	I1009 18:46:17.680557   16346 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 18:46:17.682069   16346 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19780-9209/kubeconfig
	I1009 18:46:17.683407   16346 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9209/.minikube
	I1009 18:46:17.684791   16346 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1009 18:46:17.687142   16346 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1009 18:46:17.687378   16346 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 18:46:17.710416   16346 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1009 18:46:17.710494   16346 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:46:17.757888   16346 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-10-09 18:46:17.748890206 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:46:17.757987   16346 docker.go:318] overlay module found
	I1009 18:46:17.759682   16346 out.go:97] Using the docker driver based on user configuration
	I1009 18:46:17.759706   16346 start.go:297] selected driver: docker
	I1009 18:46:17.759711   16346 start.go:901] validating driver "docker" against <nil>
	I1009 18:46:17.759787   16346 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 18:46:17.803449   16346 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-10-09 18:46:17.79499104 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 18:46:17.803603   16346 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1009 18:46:17.804100   16346 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1009 18:46:17.804238   16346 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1009 18:46:17.805980   16346 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-509071 host does not exist
	  To start a cluster, run: "minikube start -p download-only-509071"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-509071
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (1.06s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-242838 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-242838" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-242838
--- PASS: TestDownloadOnlyKic (1.06s)

TestBinaryMirror (0.76s)

=== RUN   TestBinaryMirror
I1009 18:46:22.768690   15983 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-233255 --alsologtostderr --binary-mirror http://127.0.0.1:45383 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-233255" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-233255
--- PASS: TestBinaryMirror (0.76s)
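
The "?checksum=file:..." query in the binary.go:74 line above is the go-getter convention that minikube's download code relies on: the payload is fetched and then verified against the published .sha256 file rather than cached blindly. A minimal Go sketch of composing such a URL (checksumURL is a hypothetical helper, not minikube's actual code):

	package main

	import "fmt"

	// checksumURL appends a go-getter style checksum query so the fetched
	// binary is verified against its published .sha256 file. Hypothetical
	// helper that only mirrors the URL shape logged by binary.go:74 above.
	func checksumURL(binary string) string {
		base := "https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/" + binary
		return fmt.Sprintf("%s?checksum=file:%s.sha256", base, base)
	}

	func main() {
		fmt.Println(checksumURL("kubectl")) // prints the URL seen in the log line above
	}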

TestOffline (59.21s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-980106 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-980106 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (56.143500805s)
helpers_test.go:175: Cleaning up "offline-crio-980106" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-980106
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-980106: (3.062748279s)
--- PASS: TestOffline (59.21s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:935: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-814968
addons_test.go:935: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-814968: exit status 85 (57.410007ms)

-- stdout --
	* Profile "addons-814968" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-814968"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:946: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-814968
addons_test.go:946: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-814968: exit status 85 (56.118752ms)

-- stdout --
	* Profile "addons-814968" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-814968"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (164.26s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-814968 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-814968 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m44.256514162s)
--- PASS: TestAddons/Setup (164.26s)
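
Each "(dbg) Run:" line in this report is the harness shelling out to the freshly built minikube binary and capturing combined output for the log. A minimal sketch of that pattern with os/exec (flags abbreviated from the invocation above; illustrative only, not the harness's real helper):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Abbreviated form of the start invocation logged above; output is
		// captured so it can be asserted on and echoed into the report.
		cmd := exec.Command("out/minikube-linux-amd64", "start",
			"-p", "addons-814968", "--wait=true", "--memory=4000",
			"--driver=docker", "--container-runtime=crio",
			"--addons=registry", "--addons=metrics-server")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("minikube start failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("%s", out)
	}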

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-814968 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-814968 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/parallel/Registry (13.82s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 2.728902ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-s2zbn" [e5e37670-4f6a-48d7-8ec0-96a1df679765] Running
I1009 18:57:19.253872   15983 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1009 18:57:19.253896   15983 kapi.go:107] duration metric: took 5.801424ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003381867s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-zpr6p" [1a3e151b-470d-420f-a50b-d42194bf9620] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003428106s
addons_test.go:331: (dbg) Run:  kubectl --context addons-814968 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-814968 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-814968 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.029971373s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-814968 ip
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-814968 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (13.82s)
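
The helpers_test.go:344 waits above poll the API server for pods matching a label selector until every match is Running. A minimal client-go sketch of that kind of wait (an illustrative stand-in for the harness's helper, assuming the default kubeconfig and the selector from this test):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForRunning polls for pods matching selector in ns until every
	// match is Running or the timeout expires.
	func waitForRunning(c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := c.CoreV1().Pods(ns).List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				running := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						running = false
						break
					}
				}
				if running {
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pods %q in %q not Running within %v", selector, ns, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		err = waitForRunning(kubernetes.NewForConfigOrDie(cfg),
			"kube-system", "actual-registry=true", 6*time.Minute)
		if err != nil {
			panic(err)
		}
		fmt.Println("registry pods are Running")
	}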

TestAddons/parallel/InspektorGadget (12.02s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jld8l" [e8830064-0eb1-4a89-b97f-f00bb502da05] Running
addons_test.go:758: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004109777s
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-814968 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-amd64 -p addons-814968 addons disable inspektor-gadget --alsologtostderr -v=1: (6.014141532s)
--- PASS: TestAddons/parallel/InspektorGadget (12.02s)

TestAddons/parallel/CSI (57.71s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1009 18:57:19.248108   15983 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 5.816282ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-814968 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/10/09 18:57:32 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-814968 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [369004bb-d7f6-4ff4-be79-992054e7613a] Pending
helpers_test.go:344: "task-pv-pod" [369004bb-d7f6-4ff4-be79-992054e7613a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [369004bb-d7f6-4ff4-be79-992054e7613a] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.00354369s
addons_test.go:511: (dbg) Run:  kubectl --context addons-814968 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-814968 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-814968 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-814968 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-814968 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-814968 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-814968 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [97305f73-1bde-484b-ad3d-af4ce402a2fc] Pending
helpers_test.go:344: "task-pv-pod-restore" [97305f73-1bde-484b-ad3d-af4ce402a2fc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [97305f73-1bde-484b-ad3d-af4ce402a2fc] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003269727s
addons_test.go:553: (dbg) Run:  kubectl --context addons-814968 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-814968 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-814968 delete volumesnapshot new-snapshot-demo
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-814968 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-814968 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-amd64 -p addons-814968 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.564519518s)
--- PASS: TestAddons/parallel/CSI (57.71s)
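
The long run of helpers_test.go:394 lines above is a poll of the claim's .status.phase via kubectl jsonpath until it leaves Pending. The equivalent check in client-go, sketched under the same caveats as the previous example (library-style, reusing that sketch's client construction):

	package sketch

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPVCBound polls status.phase, as the kubectl jsonpath loop
	// above does, until the claim reports Bound or the timeout expires.
	func waitForPVCBound(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pvc, err := c.CoreV1().PersistentVolumeClaims(ns).Get(
				context.TODO(), name, metav1.GetOptions{})
			if err == nil && pvc.Status.Phase == corev1.ClaimBound {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pvc %s/%s not Bound within %v", ns, name, timeout)
	}

A call such as waitForPVCBound(client, "default", "hpvc", 6*time.Minute) would cover the "hpvc" wait above.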

TestAddons/parallel/Headlamp (17.54s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-814968 --alsologtostderr -v=1
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-k5kkq" [a717ca9a-3a68-46a2-98ea-a2f02ecb243f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-k5kkq" [a717ca9a-3a68-46a2-98ea-a2f02ecb243f] Running
addons_test.go:748: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004314046s
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-814968 addons disable headlamp --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-amd64 -p addons-814968 addons disable headlamp --alsologtostderr -v=1: (5.779417727s)
--- PASS: TestAddons/parallel/Headlamp (17.54s)

TestAddons/parallel/CloudSpanner (5.49s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-626ks" [e647b19e-2eea-4da4-8d02-50bd3ea1eea4] Running
addons_test.go:775: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003734317s
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-814968 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.49s)

TestAddons/parallel/LocalPath (53.8s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:884: (dbg) Run:  kubectl --context addons-814968 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:890: (dbg) Run:  kubectl --context addons-814968 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:894: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-814968 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [062d8174-d9dd-4d60-b8d6-6929f087b4f9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [062d8174-d9dd-4d60-b8d6-6929f087b4f9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [062d8174-d9dd-4d60-b8d6-6929f087b4f9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:897: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003994713s
addons_test.go:902: (dbg) Run:  kubectl --context addons-814968 get pvc test-pvc -o=json
addons_test.go:911: (dbg) Run:  out/minikube-linux-amd64 -p addons-814968 ssh "cat /opt/local-path-provisioner/pvc-0ee2d6e6-4e3a-44c5-8adf-db1e9e8041de_default_test-pvc/file1"
addons_test.go:923: (dbg) Run:  kubectl --context addons-814968 delete pod test-local-path
addons_test.go:927: (dbg) Run:  kubectl --context addons-814968 delete pvc test-pvc
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-814968 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-amd64 -p addons-814968 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.950408896s)
--- PASS: TestAddons/parallel/LocalPath (53.80s)

TestAddons/parallel/NvidiaDevicePlugin (5.47s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-7txf4" [91c3baad-6ee1-4595-bce6-7b2db5cb9cd3] Running
addons_test.go:960: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004233918s
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-814968 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.47s)

TestAddons/parallel/Yakd (11.67s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-nh9dn" [a70c0473-9441-4e6a-ab81-59121fb24fb3] Running
addons_test.go:969: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003012587s
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-814968 addons disable yakd --alsologtostderr -v=1
addons_test.go:975: (dbg) Done: out/minikube-linux-amd64 -p addons-814968 addons disable yakd --alsologtostderr -v=1: (5.670059111s)
--- PASS: TestAddons/parallel/Yakd (11.67s)

TestAddons/StoppedEnableDisable (12.06s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-814968
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-814968: (11.802079321s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-814968
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-814968
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-814968
--- PASS: TestAddons/StoppedEnableDisable (12.06s)

TestCertOptions (27.66s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-266247 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-266247 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (22.646868952s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-266247 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-266247 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-266247 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-266247" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-266247
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-266247: (4.346135719s)
--- PASS: TestCertOptions (27.66s)

TestCertExpiration (228.86s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-704906 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-704906 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (28.907802418s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-704906 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-704906 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (17.283724391s)
helpers_test.go:175: Cleaning up "cert-expiration-704906" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-704906
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-704906: (2.664311152s)
--- PASS: TestCertExpiration (228.86s)

TestForceSystemdFlag (26.78s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-917716 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-917716 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (24.125033234s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-917716 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-917716" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-917716
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-917716: (2.369538291s)
--- PASS: TestForceSystemdFlag (26.78s)

TestForceSystemdEnv (32.44s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-007654 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-007654 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (29.862809233s)
helpers_test.go:175: Cleaning up "force-systemd-env-007654" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-007654
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-007654: (2.575768637s)
--- PASS: TestForceSystemdEnv (32.44s)

TestKVMDriverInstallOrUpdate (3.34s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I1009 19:32:35.814069   15983 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1009 19:32:35.814258   15983 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1009 19:32:35.848794   15983 install.go:62] docker-machine-driver-kvm2: exit status 1
W1009 19:32:35.849227   15983 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1009 19:32:35.849317   15983 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3498112616/001/docker-machine-driver-kvm2
I1009 19:32:36.134576   15983 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3498112616/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x52f3c80 0x52f3c80 0x52f3c80 0x52f3c80 0x52f3c80 0x52f3c80 0x52f3c80] Decompressors:map[bz2:0xc000811240 gz:0xc000811248 tar:0xc0008111e0 tar.bz2:0xc000811200 tar.gz:0xc000811210 tar.xz:0xc000811220 tar.zst:0xc000811230 tbz2:0xc000811200 tgz:0xc000811210 txz:0xc000811220 tzst:0xc000811230 xz:0xc000811250 zip:0xc000811260 zst:0xc000811258] Getters:map[file:0xc001958ba0 http:0xc0019601e0 https:0xc001960230] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1009 19:32:36.134774   15983 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3498112616/001/docker-machine-driver-kvm2
I1009 19:32:37.710655   15983 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1009 19:32:37.710743   15983 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1009 19:32:37.741640   15983 install.go:137] /home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1009 19:32:37.741679   15983 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1009 19:32:37.741738   15983 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1009 19:32:37.741770   15983 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3498112616/002/docker-machine-driver-kvm2
I1009 19:32:37.907464   15983 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3498112616/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x52f3c80 0x52f3c80 0x52f3c80 0x52f3c80 0x52f3c80 0x52f3c80 0x52f3c80] Decompressors:map[bz2:0xc000811240 gz:0xc000811248 tar:0xc0008111e0 tar.bz2:0xc000811200 tar.gz:0xc000811210 tar.xz:0xc000811220 tar.zst:0xc000811230 tbz2:0xc000811200 tgz:0xc000811210 txz:0xc000811220 tzst:0xc000811230 xz:0xc000811250 zip:0xc000811260 zst:0xc000811258] Getters:map[file:0xc001b3d330 http:0xc0018c10e0 https:0xc0018c1130] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1009 19:32:37.907509   15983 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3498112616/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (3.34s)
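
The W/I lines above show exactly what this test covers: the arch-suffixed driver URL fails its checksum fetch with a 404, and driver.go:46 falls back to the common, unsuffixed URL. A stripped-down sketch of that two-step fallback (plain net/http, with the checksum verification omitted; fetch is a hypothetical helper):

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// fetch downloads url to dst, treating any non-200 status as an error.
	func fetch(url, dst string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("bad response code: %d", resp.StatusCode)
		}
		f, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(f, resp.Body)
		return err
	}

	func main() {
		base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"
		dst := "/tmp/docker-machine-driver-kvm2"
		// Try the arch-specific artifact first, then fall back to the common
		// name, mirroring driver.go:46's "trying to get the common version".
		if err := fetch(base+"-amd64", dst); err != nil {
			fmt.Println("arch-specific download failed:", err)
			if err := fetch(base, dst); err != nil {
				fmt.Println("common download failed:", err)
				os.Exit(1)
			}
		}
		fmt.Println("driver downloaded to", dst)
	}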

TestErrorSpam/setup (20.99s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-233272 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-233272 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-233272 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-233272 --driver=docker  --container-runtime=crio: (20.993044739s)
--- PASS: TestErrorSpam/setup (20.99s)

TestErrorSpam/start (0.59s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-233272 --log_dir /tmp/nospam-233272 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-233272 --log_dir /tmp/nospam-233272 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-233272 --log_dir /tmp/nospam-233272 start --dry-run
--- PASS: TestErrorSpam/start (0.59s)

TestErrorSpam/status (0.89s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-233272 --log_dir /tmp/nospam-233272 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-233272 --log_dir /tmp/nospam-233272 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-233272 --log_dir /tmp/nospam-233272 status
--- PASS: TestErrorSpam/status (0.89s)

TestErrorSpam/pause (1.59s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-233272 --log_dir /tmp/nospam-233272 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-233272 --log_dir /tmp/nospam-233272 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-233272 --log_dir /tmp/nospam-233272 pause
--- PASS: TestErrorSpam/pause (1.59s)

TestErrorSpam/unpause (1.59s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-233272 --log_dir /tmp/nospam-233272 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-233272 --log_dir /tmp/nospam-233272 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-233272 --log_dir /tmp/nospam-233272 unpause
--- PASS: TestErrorSpam/unpause (1.59s)

TestErrorSpam/stop (1.36s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-233272 --log_dir /tmp/nospam-233272 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-233272 --log_dir /tmp/nospam-233272 stop: (1.177696949s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-233272 --log_dir /tmp/nospam-233272 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-233272 --log_dir /tmp/nospam-233272 stop
--- PASS: TestErrorSpam/stop (1.36s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19780-9209/.minikube/files/etc/test/nested/copy/15983/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (41.13s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-275165 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-275165 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (41.129284983s)
--- PASS: TestFunctional/serial/StartWithProxy (41.13s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (26.86s)

=== RUN   TestFunctional/serial/SoftStart
I1009 19:03:43.393486   15983 config.go:182] Loaded profile config "functional-275165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-275165 --alsologtostderr -v=8
E1009 19:04:08.442832   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:04:08.449246   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:04:08.460616   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:04:08.482049   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:04:08.523489   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:04:08.605048   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:04:08.766602   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:04:09.088820   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:04:09.731067   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-275165 --alsologtostderr -v=8: (26.860546179s)
functional_test.go:663: soft start took 26.861689412s for "functional-275165" cluster.
I1009 19:04:10.254502   15983 config.go:182] Loaded profile config "functional-275165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (26.86s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-275165 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 cache add registry.k8s.io/pause:3.1
E1009 19:04:11.012801   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-275165 cache add registry.k8s.io/pause:3.3: (1.170594075s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.13s)

TestFunctional/serial/CacheCmd/cache/add_local (1.39s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-275165 /tmp/TestFunctionalserialCacheCmdcacheadd_local638492901/001
E1009 19:04:13.575108   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 cache add minikube-local-cache-test:functional-275165
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-275165 cache add minikube-local-cache-test:functional-275165: (1.034790397s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 cache delete minikube-local-cache-test:functional-275165
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-275165
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.39s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-275165 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (283.881655ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)
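
The cache_reload steps above exercise the full round-trip: remove the image inside the node, confirm crictl inspecti fails, run cache reload, and confirm the image is back. A minimal Go sketch of the same flow, assuming the binary path and profile name from this run (the mk helper is hypothetical, not part of the test suite):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// mk runs a minikube subcommand against the profile used in this report.
	func mk(args ...string) ([]byte, error) {
		full := append([]string{"-p", "functional-275165"}, args...)
		return exec.Command("out/minikube-linux-amd64", full...).CombinedOutput()
	}

	func main() {
		// Remove the cached image from inside the node.
		mk("ssh", "sudo", "crictl", "rmi", "registry.k8s.io/pause:latest")
		// inspecti should now fail, as in the log above.
		if _, err := mk("ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest"); err == nil {
			fmt.Println("expected the image to be absent")
		}
		// cache reload pushes every image in the local cache back into the node.
		mk("cache", "reload")
		// The image should be present again.
		if _, err := mk("ssh", "sudo", "crictl", "inspecti", "registry.k8s.io/pause:latest"); err != nil {
			fmt.Println("reload did not restore the image:", err)
		}
	}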

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 kubectl -- --context functional-275165 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-275165 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (39.18s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-275165 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1009 19:04:18.697012   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:04:28.939045   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:04:49.421022   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-275165 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.180709181s)
functional_test.go:761: restart took 39.180836473s for "functional-275165" cluster.
I1009 19:04:56.523903   15983 config.go:182] Loaded profile config "functional-275165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (39.18s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-275165 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.4s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-275165 logs: (1.404359412s)
--- PASS: TestFunctional/serial/LogsCmd (1.40s)

TestFunctional/serial/LogsFileCmd (1.42s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 logs --file /tmp/TestFunctionalserialLogsFileCmd831450969/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-275165 logs --file /tmp/TestFunctionalserialLogsFileCmd831450969/001/logs.txt: (1.416128373s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.42s)

TestFunctional/serial/InvalidService (3.94s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-275165 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-275165
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-275165: exit status 115 (341.767192ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32079 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-275165 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.94s)

TestFunctional/parallel/ConfigCmd (0.4s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-275165 config get cpus: exit status 14 (87.440349ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-275165 config get cpus: exit status 14 (62.37605ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)
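
Both non-zero exits above are expected: minikube config get exits with status 14 when the key is unset, which is how the test distinguishes "unset" from "set". A minimal Go sketch of checking that exit code, assuming the same binary and profile as this run (nothing here is part of the test suite itself):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// "config get cpus" fails while the key is unset; the log shows exit status 14.
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-275165", "config", "get", "cpus")
		err := cmd.Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Println("exit code:", exitErr.ExitCode()) // 14 when cpus is unset
		}
	}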

TestFunctional/parallel/DashboardCmd (8.3s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-275165 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-275165 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 62881: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.30s)

TestFunctional/parallel/DryRun (0.41s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-275165 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-275165 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (168.151954ms)

-- stdout --
	* [functional-275165] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-9209/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9209/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1009 19:05:16.922555   62187 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:05:16.922676   62187 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:05:16.922684   62187 out.go:358] Setting ErrFile to fd 2...
	I1009 19:05:16.922688   62187 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:05:16.922900   62187 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9209/.minikube/bin
	I1009 19:05:16.923500   62187 out.go:352] Setting JSON to false
	I1009 19:05:16.924496   62187 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2868,"bootTime":1728497849,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:05:16.924598   62187 start.go:139] virtualization: kvm guest
	I1009 19:05:16.926982   62187 out.go:177] * [functional-275165] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 19:05:16.928612   62187 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 19:05:16.928640   62187 notify.go:220] Checking for updates...
	I1009 19:05:16.931107   62187 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:05:16.932224   62187 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9209/kubeconfig
	I1009 19:05:16.933475   62187 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9209/.minikube
	I1009 19:05:16.934726   62187 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:05:16.935876   62187 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:05:16.937613   62187 config.go:182] Loaded profile config "functional-275165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:05:16.938123   62187 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 19:05:16.964294   62187 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1009 19:05:16.964453   62187 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:05:17.024363   62187 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-10-09 19:05:17.011887358 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:05:17.024506   62187 docker.go:318] overlay module found
	I1009 19:05:17.026575   62187 out.go:177] * Using the docker driver based on existing profile
	I1009 19:05:17.027924   62187 start.go:297] selected driver: docker
	I1009 19:05:17.027947   62187 start.go:901] validating driver "docker" against &{Name:functional-275165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-275165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:05:17.028055   62187 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:05:17.030365   62187 out.go:201] 
	W1009 19:05:17.031851   62187 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1009 19:05:17.033222   62187 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-275165 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.41s)

TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-275165 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-275165 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (183.070582ms)

-- stdout --
	* [functional-275165] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-9209/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9209/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1009 19:05:17.053201   62285 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:05:17.053358   62285 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:05:17.053369   62285 out.go:358] Setting ErrFile to fd 2...
	I1009 19:05:17.053376   62285 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:05:17.053799   62285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9209/.minikube/bin
	I1009 19:05:17.054562   62285 out.go:352] Setting JSON to false
	I1009 19:05:17.055758   62285 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":2868,"bootTime":1728497849,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:05:17.055834   62285 start.go:139] virtualization: kvm guest
	I1009 19:05:17.057833   62285 out.go:177] * [functional-275165] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1009 19:05:17.059121   62285 notify.go:220] Checking for updates...
	I1009 19:05:17.059163   62285 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 19:05:17.060488   62285 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:05:17.061641   62285 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9209/kubeconfig
	I1009 19:05:17.062946   62285 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9209/.minikube
	I1009 19:05:17.064260   62285 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:05:17.065577   62285 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:05:17.067953   62285 config.go:182] Loaded profile config "functional-275165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:05:17.068784   62285 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 19:05:17.101667   62285 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1009 19:05:17.101781   62285 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:05:17.164448   62285 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:34 OomKillDisable:true NGoroutines:54 SystemTime:2024-10-09 19:05:17.154496673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:05:17.164593   62285 docker.go:318] overlay module found
	I1009 19:05:17.166039   62285 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1009 19:05:17.167433   62285 start.go:297] selected driver: docker
	I1009 19:05:17.167448   62285 start.go:901] validating driver "docker" against &{Name:functional-275165 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1728382586-19774@sha256:5d8c4f6d838467365e214e2194dd0153a763e3f78723b5f2a8e06ef7b47409ec Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-275165 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1009 19:05:17.167539   62285 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:05:17.169515   62285 out.go:201] 
	W1009 19:05:17.170624   62285 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1009 19:05:17.171981   62285 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)

TestFunctional/parallel/StatusCmd (1.03s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)

TestFunctional/parallel/ServiceCmdConnect (10.71s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-275165 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-275165 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-8p6j9" [3bc17738-db18-4b48-8bb6-f66a30392e56] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-8p6j9" [3bc17738-db18-4b48-8bb6-f66a30392e56] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004579648s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30275
functional_test.go:1675: http://192.168.49.2:30275: success! body:

Hostname: hello-node-connect-67bdd5bbb4-8p6j9

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30275
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.71s)
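
The sequence above is the standard NodePort smoke test: create a deployment, expose it on port 8080, ask minikube service --url for the node URL, and issue a plain HTTP GET. A minimal Go sketch of the client side, assuming the deployment already exists as created above:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os/exec"
		"strings"
	)

	func main() {
		// Resolve the NodePort URL the same way the test does.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-275165",
			"service", "hello-node-connect", "--url").Output()
		if err != nil {
			panic(err)
		}
		url := strings.TrimSpace(string(out))
		// Hit the echoserver and print its response body.
		resp, err := http.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%s returned %d:\n%s\n", url, resp.StatusCode, body)
	}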

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (29.75s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0a56973e-a56e-4598-8287-f533b4d9ddc9] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003746116s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-275165 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-275165 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-275165 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-275165 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [534a0a3b-be8a-4a37-9b8e-99f479994b60] Pending
helpers_test.go:344: "sp-pod" [534a0a3b-be8a-4a37-9b8e-99f479994b60] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [534a0a3b-be8a-4a37-9b8e-99f479994b60] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.00385103s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-275165 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-275165 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-275165 delete -f testdata/storage-provisioner/pod.yaml: (1.976882808s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-275165 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bb1de110-f46e-4cd8-8d84-7a8d054e0784] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [bb1de110-f46e-4cd8-8d84-7a8d054e0784] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003477517s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-275165 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.75s)
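
The point of the double pod lifecycle above is that a file written through the claim survives pod deletion: the second sp-pod mounts the same PVC and still sees /tmp/mount/foo. A minimal Go sketch of that persistence check, assuming the manifests used by the test (the kc helper is hypothetical, and waiting for the new pod to reach Running is elided):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// kc runs kubectl against the context used in this report.
	func kc(args ...string) ([]byte, error) {
		full := append([]string{"--context", "functional-275165"}, args...)
		return exec.Command("kubectl", full...).CombinedOutput()
	}

	func main() {
		// Write through the claim, recycle the pod, then check the file survived.
		kc("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
		kc("delete", "-f", "testdata/storage-provisioner/pod.yaml")
		kc("apply", "-f", "testdata/storage-provisioner/pod.yaml")
		// (wait here for the replacement pod to be Running)
		out, _ := kc("exec", "sp-pod", "--", "ls", "/tmp/mount")
		fmt.Printf("%s", out) // expect "foo" in the listing
	}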

TestFunctional/parallel/SSHCmd (0.65s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.65s)

TestFunctional/parallel/CpCmd (1.89s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh -n functional-275165 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 cp functional-275165:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2186177981/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh -n functional-275165 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh -n functional-275165 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.89s)

TestFunctional/parallel/MySQL (19.91s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-275165 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-d2hn5" [0b9becd9-9f98-431e-ac73-8e5a5512d2a7] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-d2hn5" [0b9becd9-9f98-431e-ac73-8e5a5512d2a7] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 18.004257031s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-275165 exec mysql-6cdb49bbb-d2hn5 -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-275165 exec mysql-6cdb49bbb-d2hn5 -- mysql -ppassword -e "show databases;": exit status 1 (107.09966ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1009 19:05:44.090120   15983 retry.go:31] will retry after 1.48232784s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-275165 exec mysql-6cdb49bbb-d2hn5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (19.91s)
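
The ERROR 2002 above is the usual readiness gap: the pod reports Running before mysqld starts accepting connections, so the harness retries after a short delay. A minimal Go sketch of the same retry loop, assuming the pod name from this run:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		// Retry the query until mysqld accepts connections, backing off each attempt.
		for attempt := 1; attempt <= 5; attempt++ {
			out, err := exec.Command("kubectl", "--context", "functional-275165",
				"exec", "mysql-6cdb49bbb-d2hn5", "--",
				"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
			if err == nil {
				fmt.Printf("%s", out)
				return
			}
			time.Sleep(time.Duration(attempt) * time.Second)
		}
		fmt.Println("mysql never became reachable")
	}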

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/15983/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh "sudo cat /etc/test/nested/copy/15983/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (1.8s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/15983.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh "sudo cat /etc/ssl/certs/15983.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/15983.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh "sudo cat /usr/share/ca-certificates/15983.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/159832.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh "sudo cat /etc/ssl/certs/159832.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/159832.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh "sudo cat /usr/share/ca-certificates/159832.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.80s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-275165 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
2024/10/09 19:05:25 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-275165 ssh "sudo systemctl is-active docker": exit status 1 (310.940718ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-275165 ssh "sudo systemctl is-active containerd": exit status 1 (276.685047ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.59s)
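
The non-zero exits above are the expected result: systemctl is-active exits 0 for an active unit and non-zero otherwise (status 3 here), and minikube ssh propagates that code, which is what the test keys off with crio as the active runtime. A minimal Go sketch of the same probe, assuming the binary and profile from this run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Only the configured runtime should report "active".
		for _, unit := range []string{"docker", "containerd", "crio"} {
			cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-275165",
				"ssh", "sudo systemctl is-active "+unit)
			out, _ := cmd.CombinedOutput()
			fmt.Printf("%s: %s", unit, out)
		}
	}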

TestFunctional/parallel/License (0.18s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

TestFunctional/parallel/ServiceCmd/DeployApp (9.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-275165 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-275165 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-fjr7x" [e44e70d7-1bd5-4a7e-8342-37a2fcbd51b2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-fjr7x" [e44e70d7-1bd5-4a7e-8342-37a2fcbd51b2] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.004046816s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.23s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-275165 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-275165 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-275165 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 59305: os: process already finished
helpers_test.go:502: unable to terminate pid 58970: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-275165 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.45s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-275165 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.27s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-275165 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [bde2f448-4cb9-4492-90c0-2565daa71d66] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [bde2f448-4cb9-4492-90c0-2565daa71d66] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.004906886s
I1009 19:05:15.464465   15983 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.27s)

TestFunctional/parallel/ServiceCmd/List (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 service list -o json
functional_test.go:1494: Took "595.607923ms" to run "out/minikube-linux-amd64 -p functional-275165 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30872
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.56s)

TestFunctional/parallel/ServiceCmd/Format (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

TestFunctional/parallel/ServiceCmd/URL (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30872
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)
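
The five ServiceCmd subtests above resolve the same NodePort endpoint through different flags. A minimal sketch of the equivalent manual invocations, assuming the functional-275165 profile from this run is still up:

    # list services, plain and as JSON
    out/minikube-linux-amd64 -p functional-275165 service list
    out/minikube-linux-amd64 -p functional-275165 service list -o json
    # print the endpoint three ways: https URL, plain URL, and a Go-template field
    out/minikube-linux-amd64 -p functional-275165 service --namespace=default --https --url hello-node
    out/minikube-linux-amd64 -p functional-275165 service hello-node --url
    out/minikube-linux-amd64 -p functional-275165 service hello-node --url --format={{.IP}}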

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-275165 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.4.64 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
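
Taken together, the TunnelCmd serial steps exercise the standard LoadBalancer workflow. A hand-run sketch of the same flow (the curl step is an assumption for illustration; the test dials the ingress IP from Go instead):

    # keep the tunnel running in the background
    out/minikube-linux-amd64 -p functional-275165 tunnel --alsologtostderr &
    kubectl --context functional-275165 apply -f testdata/testsvc.yaml
    # read the LoadBalancer ingress IP once the tunnel assigns it
    kubectl --context functional-275165 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
    curl http://10.103.4.64    # use whatever IP the previous command printed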

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-275165 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.50s)

TestFunctional/parallel/MountCmd/any-port (5.65s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-275165 /tmp/TestFunctionalparallelMountCmdany-port4048831078/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1728500716103765773" to /tmp/TestFunctionalparallelMountCmdany-port4048831078/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1728500716103765773" to /tmp/TestFunctionalparallelMountCmdany-port4048831078/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1728500716103765773" to /tmp/TestFunctionalparallelMountCmdany-port4048831078/001/test-1728500716103765773
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-275165 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (318.950255ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1009 19:05:16.423025   15983 retry.go:31] will retry after 288.608996ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct  9 19:05 created-by-test
-rw-r--r-- 1 docker docker 24 Oct  9 19:05 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct  9 19:05 test-1728500716103765773
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh cat /mount-9p/test-1728500716103765773
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-275165 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [d2dcda12-c4a0-4a83-8421-f1362aedd5e2] Pending
helpers_test.go:344: "busybox-mount" [d2dcda12-c4a0-4a83-8421-f1362aedd5e2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [d2dcda12-c4a0-4a83-8421-f1362aedd5e2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [d2dcda12-c4a0-4a83-8421-f1362aedd5e2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.004072019s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-275165 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-275165 /tmp/TestFunctionalparallelMountCmdany-port4048831078/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.65s)
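
The any-port check reduces to a 9p mount plus a findmnt probe; the first probe can race the mount daemon coming up, which is why one retry appears above. The manual equivalent, with /tmp/hostdir standing in for the test's temp directory:

    out/minikube-linux-amd64 mount -p functional-275165 /tmp/hostdir:/mount-9p &
    out/minikube-linux-amd64 -p functional-275165 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-275165 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-275165 ssh "sudo umount -f /mount-9p"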

TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "393.453044ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "53.005022ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "340.897145ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "59.218149ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)
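
The JSON form is the machine-readable one, and the --light variant is markedly faster above (~59ms vs ~341ms), consistent with it skipping per-profile status probes. Extracting profile names with jq (jq is not part of the test, just an illustration; the .valid[].Name field names match current minikube output but are an assumption here):

    out/minikube-linux-amd64 profile list -o json | jq -r '.valid[].Name'
    out/minikube-linux-amd64 profile list -o json --light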

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (0.52s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-275165 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-275165
localhost/kicbase/echo-server:functional-275165
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20241007-36f62932
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-275165 image ls --format short --alsologtostderr:
I1009 19:05:27.416276   66661 out.go:345] Setting OutFile to fd 1 ...
I1009 19:05:27.416412   66661 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:05:27.416422   66661 out.go:358] Setting ErrFile to fd 2...
I1009 19:05:27.416428   66661 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:05:27.416631   66661 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9209/.minikube/bin
I1009 19:05:27.417236   66661 config.go:182] Loaded profile config "functional-275165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:05:27.417358   66661 config.go:182] Loaded profile config "functional-275165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:05:27.417766   66661 cli_runner.go:164] Run: docker container inspect functional-275165 --format={{.State.Status}}
I1009 19:05:27.436182   66661 ssh_runner.go:195] Run: systemctl --version
I1009 19:05:27.436252   66661 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-275165
I1009 19:05:27.460374   66661 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/functional-275165/id_rsa Username:docker}
I1009 19:05:27.563685   66661 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-275165 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/kicbase/echo-server           | functional-275165  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/library/nginx                 | alpine             | cb8f91112b6b5 | 48.4MB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| docker.io/library/nginx                 | latest             | 7f553e8bbc897 | 196MB  |
| localhost/minikube-local-cache-test     | functional-275165  | 35ae4ea3f76dc | 3.33kB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-275165 image ls --format table --alsologtostderr:
I1009 19:05:28.002931   67000 out.go:345] Setting OutFile to fd 1 ...
I1009 19:05:28.003224   67000 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:05:28.003236   67000 out.go:358] Setting ErrFile to fd 2...
I1009 19:05:28.003243   67000 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:05:28.003486   67000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9209/.minikube/bin
I1009 19:05:28.004113   67000 config.go:182] Loaded profile config "functional-275165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:05:28.004252   67000 config.go:182] Loaded profile config "functional-275165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:05:28.004672   67000 cli_runner.go:164] Run: docker container inspect functional-275165 --format={{.State.Status}}
I1009 19:05:28.022513   67000 ssh_runner.go:195] Run: systemctl --version
I1009 19:05:28.022576   67000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-275165
I1009 19:05:28.040301   67000 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/functional-275165/id_rsa Username:docker}
I1009 19:05:28.139601   67000 ssh_runner.go:195] Run: sudo crictl images --output json
E1009 19:05:30.382413   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-275165 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"94965812"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045","repoDigests":["docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250","docker.io/library/nginx@sha256:ae136e431e76e12e5d84979ea5e2ffff4dd9589c2435c8bb9e33e6c3960111d3"],"repoTags":["docker.io/library/nginx:alpine"],"size":"48414943"},{"id":"35ae4ea3f76dc5d5433d9f624b9da1dff890202a49d116d69633a5859406eae3","repoDigests":["localhost/minikube-local-cache-test@sha256:a948f44625a614c097b9f19c52c60cbbbfe94c0285d8ea46ac85079d2862ab1a"],"repoTags":["localhost/minikube-local-cache-test:functional-275165"],"size":"3330"},{"id":"7f553e8bbc897571642d836b31eaf6ecbe395d7641c2b24291356ed28f3f2bd0","repoDigests":["docker.io/library/nginx@sha256:396c6e925f28fbbed95a475d27c18886289c2bbc53231534dc86c163558b5e4b","docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0"],"repoTags":["docker.io/library/nginx:latest"],"size":"195818028"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-275165"],"size":"4943877"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"87190579"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-275165 image ls --format json --alsologtostderr:
I1009 19:05:27.773320   66894 out.go:345] Setting OutFile to fd 1 ...
I1009 19:05:27.773579   66894 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:05:27.773588   66894 out.go:358] Setting ErrFile to fd 2...
I1009 19:05:27.773592   66894 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:05:27.773805   66894 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9209/.minikube/bin
I1009 19:05:27.774827   66894 config.go:182] Loaded profile config "functional-275165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:05:27.775120   66894 config.go:182] Loaded profile config "functional-275165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:05:27.776132   66894 cli_runner.go:164] Run: docker container inspect functional-275165 --format={{.State.Status}}
I1009 19:05:27.794726   66894 ssh_runner.go:195] Run: systemctl --version
I1009 19:05:27.794775   66894 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-275165
I1009 19:05:27.812569   66894 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/functional-275165/id_rsa Username:docker}
I1009 19:05:27.911693   66894 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-275165 image ls --format yaml --alsologtostderr:
- id: 35ae4ea3f76dc5d5433d9f624b9da1dff890202a49d116d69633a5859406eae3
repoDigests:
- localhost/minikube-local-cache-test@sha256:a948f44625a614c097b9f19c52c60cbbbfe94c0285d8ea46ac85079d2862ab1a
repoTags:
- localhost/minikube-local-cache-test:functional-275165
size: "3330"
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 7f553e8bbc897571642d836b31eaf6ecbe395d7641c2b24291356ed28f3f2bd0
repoDigests:
- docker.io/library/nginx@sha256:396c6e925f28fbbed95a475d27c18886289c2bbc53231534dc86c163558b5e4b
- docker.io/library/nginx@sha256:d2eb56950b84efe34f966a2b92efb1a1a2ea53e7e93b94cdf45a27cf3cd47fc0
repoTags:
- docker.io/library/nginx:latest
size: "195818028"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-275165
size: "4943877"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: cb8f91112b6b50ead202f48bbf81cec4b34c254417254efd94c803f7dd718045
repoDigests:
- docker.io/library/nginx@sha256:2140dad235c130ac861018a4e13a6bc8aea3a35f3a40e20c1b060d51a7efd250
- docker.io/library/nginx@sha256:ae136e431e76e12e5d84979ea5e2ffff4dd9589c2435c8bb9e33e6c3960111d3
repoTags:
- docker.io/library/nginx:alpine
size: "48414943"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-275165 image ls --format yaml --alsologtostderr:
I1009 19:05:27.540409   66756 out.go:345] Setting OutFile to fd 1 ...
I1009 19:05:27.540534   66756 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:05:27.540543   66756 out.go:358] Setting ErrFile to fd 2...
I1009 19:05:27.540548   66756 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:05:27.540744   66756 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9209/.minikube/bin
I1009 19:05:27.541336   66756 config.go:182] Loaded profile config "functional-275165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:05:27.541431   66756 config.go:182] Loaded profile config "functional-275165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:05:27.541810   66756 cli_runner.go:164] Run: docker container inspect functional-275165 --format={{.State.Status}}
I1009 19:05:27.559949   66756 ssh_runner.go:195] Run: systemctl --version
I1009 19:05:27.560007   66756 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-275165
I1009 19:05:27.579065   66756 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/functional-275165/id_rsa Username:docker}
I1009 19:05:27.679398   66756 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
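
The four ImageList subtests are one "image ls" call behind four serializers; each is rendered from the same "sudo crictl images --output json" dump visible in the stderr traces above:

    out/minikube-linux-amd64 -p functional-275165 image ls --format short   # repo:tag per line
    out/minikube-linux-amd64 -p functional-275165 image ls --format table   # boxed table
    out/minikube-linux-amd64 -p functional-275165 image ls --format json    # ids, digests, tags, sizes
    out/minikube-linux-amd64 -p functional-275165 image ls --format yaml    # same data as YAML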

TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-275165 ssh pgrep buildkitd: exit status 1 (254.446395ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 image build -t localhost/my-image:functional-275165 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-275165 image build -t localhost/my-image:functional-275165 testdata/build --alsologtostderr: (3.301648243s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-275165 image build -t localhost/my-image:functional-275165 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 483a5d7416f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-275165
--> 3899dfaa9c9
Successfully tagged localhost/my-image:functional-275165
3899dfaa9c91a12bfa5ac914cc58c95341115db7916c3612c1afece25d22c6af
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-275165 image build -t localhost/my-image:functional-275165 testdata/build --alsologtostderr:
I1009 19:05:27.906015   66953 out.go:345] Setting OutFile to fd 1 ...
I1009 19:05:27.906201   66953 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:05:27.906213   66953 out.go:358] Setting ErrFile to fd 2...
I1009 19:05:27.906218   66953 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1009 19:05:27.906397   66953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9209/.minikube/bin
I1009 19:05:27.907012   66953 config.go:182] Loaded profile config "functional-275165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:05:27.907736   66953 config.go:182] Loaded profile config "functional-275165": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I1009 19:05:27.908148   66953 cli_runner.go:164] Run: docker container inspect functional-275165 --format={{.State.Status}}
I1009 19:05:27.928980   66953 ssh_runner.go:195] Run: systemctl --version
I1009 19:05:27.929040   66953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-275165
I1009 19:05:27.948584   66953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/functional-275165/id_rsa Username:docker}
I1009 19:05:28.047572   66953 build_images.go:161] Building image from path: /tmp/build.2397478911.tar
I1009 19:05:28.047649   66953 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1009 19:05:28.056666   66953 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2397478911.tar
I1009 19:05:28.060094   66953 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2397478911.tar: stat -c "%s %y" /var/lib/minikube/build/build.2397478911.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2397478911.tar': No such file or directory
I1009 19:05:28.060118   66953 ssh_runner.go:362] scp /tmp/build.2397478911.tar --> /var/lib/minikube/build/build.2397478911.tar (3072 bytes)
I1009 19:05:28.083769   66953 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2397478911
I1009 19:05:28.092377   66953 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2397478911 -xf /var/lib/minikube/build/build.2397478911.tar
I1009 19:05:28.101286   66953 crio.go:315] Building image: /var/lib/minikube/build/build.2397478911
I1009 19:05:28.101386   66953 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-275165 /var/lib/minikube/build/build.2397478911 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1009 19:05:31.134072   66953 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-275165 /var/lib/minikube/build/build.2397478911 --cgroup-manager=cgroupfs: (3.032646915s)
I1009 19:05:31.134167   66953 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2397478911
I1009 19:05:31.144760   66953 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2397478911.tar
I1009 19:05:31.155094   66953 build_images.go:217] Built localhost/my-image:functional-275165 from /tmp/build.2397478911.tar
I1009 19:05:31.155130   66953 build_images.go:133] succeeded building to: functional-275165
I1009 19:05:31.155135   66953 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)
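
With the crio runtime, "image build" tars the local context, copies it into the node, and shells out to "sudo podman build", all visible in the stderr above. A minimal sketch of the same build by hand; the Dockerfile content is a reconstruction from the three STEP lines, not the literal testdata/build fixture, and /tmp/build is a placeholder:

    mkdir -p /tmp/build && cd /tmp/build
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    echo hello > content.txt   # any file will do; the build only ADDs it
    out/minikube-linux-amd64 -p functional-275165 image build -t localhost/my-image:functional-275165 .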

TestFunctional/parallel/ImageCommands/Setup (1s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-275165
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.00s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 image load --daemon kicbase/echo-server:functional-275165 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-275165 image load --daemon kicbase/echo-server:functional-275165 --alsologtostderr: (1.196583867s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.42s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 image load --daemon kicbase/echo-server:functional-275165 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-275165
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 image load --daemon kicbase/echo-server:functional-275165 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.49s)

TestFunctional/parallel/MountCmd/specific-port (1.69s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-275165 /tmp/TestFunctionalparallelMountCmdspecific-port3207774073/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-275165 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (328.064571ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1009 19:05:22.084143   15983 retry.go:31] will retry after 258.483833ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-275165 /tmp/TestFunctionalparallelMountCmdspecific-port3207774073/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-275165 ssh "sudo umount -f /mount-9p": exit status 1 (294.822727ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-275165 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-275165 /tmp/TestFunctionalparallelMountCmdspecific-port3207774073/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.69s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 image save kicbase/echo-server:functional-275165 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-275165 image save kicbase/echo-server:functional-275165 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (2.50239209s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.50s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.77s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-275165 /tmp/TestFunctionalparallelMountCmdVerifyCleanup911533572/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-275165 /tmp/TestFunctionalparallelMountCmdVerifyCleanup911533572/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-275165 /tmp/TestFunctionalparallelMountCmdVerifyCleanup911533572/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-275165 ssh "findmnt -T" /mount1: exit status 1 (485.131388ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1009 19:05:23.934807   15983 retry.go:31] will retry after 274.53408ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-275165 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-275165 /tmp/TestFunctionalparallelMountCmdVerifyCleanup911533572/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-275165 /tmp/TestFunctionalparallelMountCmdVerifyCleanup911533572/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-275165 /tmp/TestFunctionalparallelMountCmdVerifyCleanup911533572/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.77s)
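
VerifyCleanup's point is the kill switch: a single "mount --kill=true" tears down every mount daemon for the profile at once, which is why the per-mount stop attempts afterwards find no parent process. A sketch, with /tmp/hostdir as a placeholder directory:

    out/minikube-linux-amd64 mount -p functional-275165 /tmp/hostdir:/mount1 &
    out/minikube-linux-amd64 mount -p functional-275165 /tmp/hostdir:/mount2 &
    out/minikube-linux-amd64 mount -p functional-275165 /tmp/hostdir:/mount3 &
    out/minikube-linux-amd64 mount -p functional-275165 --kill=true   # kills all three daemons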

TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 image rm kicbase/echo-server:functional-275165 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.56s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.93s)
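
ImageSaveToFile, ImageRemove and ImageLoadFromFile together form a save/remove/restore round-trip. The same sequence by hand, with /tmp/echo-server-save.tar standing in for this CI workspace's tarball path:

    out/minikube-linux-amd64 -p functional-275165 image save kicbase/echo-server:functional-275165 /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-275165 image rm kicbase/echo-server:functional-275165
    out/minikube-linux-amd64 -p functional-275165 image load /tmp/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-275165 image ls   # the tag should be listed again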

TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-275165
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-275165 image save --daemon kicbase/echo-server:functional-275165 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-275165
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.72s)
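
The daemon round trip above can be replayed directly; note that with the CRI-O runtime the image lands back in Docker under a localhost/-qualified name, which is why the test inspects localhost/kicbase/echo-server:... ("demo" below is a placeholder tag):

    $ docker rmi kicbase/echo-server:demo                             # ensure no host-side copy remains
    $ minikube -p demo image save --daemon kicbase/echo-server:demo   # push from the cluster into the host dockerd
    $ docker image inspect localhost/kicbase/echo-server:demo         # verify it arrived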

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-275165
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-275165
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-275165
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (100.59s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-622984 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1009 19:06:52.304737   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-622984 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m39.8776673s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (100.59s)
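
The flags above are the whole recipe for a multi-control-plane cluster: --ha brings up additional control-plane nodes behind a shared virtual endpoint (the status checks later in this run probe https://192.168.49.254:8443, the load-balanced apiserver address). A sketch with a placeholder profile:

    $ minikube start -p demo --ha --wait=true --memory=2200 \
        --driver=docker --container-runtime=crio
    $ minikube -p demo status                 # prints one status block per node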

TestMultiControlPlane/serial/DeployApp (6.05s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-622984 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-622984 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-622984 -- rollout status deployment/busybox: (4.080586559s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-622984 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-622984 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-622984 -- exec busybox-7dff88458-66tpb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-622984 -- exec busybox-7dff88458-lgmd9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-622984 -- exec busybox-7dff88458-r59cv -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-622984 -- exec busybox-7dff88458-66tpb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-622984 -- exec busybox-7dff88458-lgmd9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-622984 -- exec busybox-7dff88458-r59cv -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-622984 -- exec busybox-7dff88458-66tpb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-622984 -- exec busybox-7dff88458-lgmd9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-622984 -- exec busybox-7dff88458-r59cv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.05s)
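
The deploy check boils down to three kubectl steps: wait for the rollout, enumerate the pods, then resolve cluster DNS from inside each one. Sketch (the pod name is whatever the enumeration printed):

    $ kubectl rollout status deployment/busybox
    $ kubectl get pods -o jsonpath='{.items[*].metadata.name}'
    $ kubectl exec <pod> -- nslookup kubernetes.default.svc.cluster.local   # must resolve via CoreDNS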

TestMultiControlPlane/serial/PingHostFromPods (1.07s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-622984 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-622984 -- exec busybox-7dff88458-66tpb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-622984 -- exec busybox-7dff88458-66tpb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-622984 -- exec busybox-7dff88458-lgmd9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-622984 -- exec busybox-7dff88458-lgmd9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-622984 -- exec busybox-7dff88458-r59cv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-622984 -- exec busybox-7dff88458-r59cv -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.07s)
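
The shell pipeline above is worth unpacking: busybox's nslookup prints the answer on its fifth output line, so awk 'NR==5' grabs that line and cut takes the third space-separated field, the host's gateway IP (192.168.49.1 on the default docker network), which the pod then pings:

    $ kubectl exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    $ kubectl exec <pod> -- sh -c "ping -c 1 192.168.49.1"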

TestMultiControlPlane/serial/AddWorkerNode (35.73s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-622984 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-622984 -v=7 --alsologtostderr: (34.872147119s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (35.73s)

TestMultiControlPlane/serial/NodeLabels (0.06s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-622984 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

TestMultiControlPlane/serial/CopyFile (16.13s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 cp testdata/cp-test.txt ha-622984:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 cp ha-622984:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile63602764/001/cp-test_ha-622984.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 cp ha-622984:/home/docker/cp-test.txt ha-622984-m02:/home/docker/cp-test_ha-622984_ha-622984-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984-m02 "sudo cat /home/docker/cp-test_ha-622984_ha-622984-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 cp ha-622984:/home/docker/cp-test.txt ha-622984-m03:/home/docker/cp-test_ha-622984_ha-622984-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984-m03 "sudo cat /home/docker/cp-test_ha-622984_ha-622984-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 cp ha-622984:/home/docker/cp-test.txt ha-622984-m04:/home/docker/cp-test_ha-622984_ha-622984-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984-m04 "sudo cat /home/docker/cp-test_ha-622984_ha-622984-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 cp testdata/cp-test.txt ha-622984-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 cp ha-622984-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile63602764/001/cp-test_ha-622984-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 cp ha-622984-m02:/home/docker/cp-test.txt ha-622984:/home/docker/cp-test_ha-622984-m02_ha-622984.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984 "sudo cat /home/docker/cp-test_ha-622984-m02_ha-622984.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 cp ha-622984-m02:/home/docker/cp-test.txt ha-622984-m03:/home/docker/cp-test_ha-622984-m02_ha-622984-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984-m03 "sudo cat /home/docker/cp-test_ha-622984-m02_ha-622984-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 cp ha-622984-m02:/home/docker/cp-test.txt ha-622984-m04:/home/docker/cp-test_ha-622984-m02_ha-622984-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984-m04 "sudo cat /home/docker/cp-test_ha-622984-m02_ha-622984-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 cp testdata/cp-test.txt ha-622984-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 cp ha-622984-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile63602764/001/cp-test_ha-622984-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 cp ha-622984-m03:/home/docker/cp-test.txt ha-622984:/home/docker/cp-test_ha-622984-m03_ha-622984.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984 "sudo cat /home/docker/cp-test_ha-622984-m03_ha-622984.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 cp ha-622984-m03:/home/docker/cp-test.txt ha-622984-m02:/home/docker/cp-test_ha-622984-m03_ha-622984-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984-m02 "sudo cat /home/docker/cp-test_ha-622984-m03_ha-622984-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 cp ha-622984-m03:/home/docker/cp-test.txt ha-622984-m04:/home/docker/cp-test_ha-622984-m03_ha-622984-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984-m04 "sudo cat /home/docker/cp-test_ha-622984-m03_ha-622984-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 cp testdata/cp-test.txt ha-622984-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 cp ha-622984-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile63602764/001/cp-test_ha-622984-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 cp ha-622984-m04:/home/docker/cp-test.txt ha-622984:/home/docker/cp-test_ha-622984-m04_ha-622984.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984 "sudo cat /home/docker/cp-test_ha-622984-m04_ha-622984.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 cp ha-622984-m04:/home/docker/cp-test.txt ha-622984-m02:/home/docker/cp-test_ha-622984-m04_ha-622984-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984-m02 "sudo cat /home/docker/cp-test_ha-622984-m04_ha-622984-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 cp ha-622984-m04:/home/docker/cp-test.txt ha-622984-m03:/home/docker/cp-test_ha-622984-m04_ha-622984-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 ssh -n ha-622984-m03 "sudo cat /home/docker/cp-test_ha-622984-m04_ha-622984-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.13s)
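
The matrix above is "minikube cp" in its three directions plus an ssh readback on each target: host-to-node, node-to-host, and node-to-node. Sketch with placeholder names:

    $ minikube -p demo cp ./local.txt demo:/home/docker/remote.txt                  # host -> node
    $ minikube -p demo cp demo:/home/docker/remote.txt ./back.txt                   # node -> host
    $ minikube -p demo cp demo-m02:/home/docker/f.txt demo-m03:/home/docker/f.txt   # node -> node
    $ minikube -p demo ssh -n demo-m03 "cat /home/docker/f.txt"                     # verify on the target node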

TestMultiControlPlane/serial/StopSecondaryNode (12.5s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-622984 node stop m02 -v=7 --alsologtostderr: (11.838295239s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-622984 status -v=7 --alsologtostderr: exit status 7 (660.290767ms)
-- stdout --
	ha-622984
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-622984-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-622984-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-622984-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1009 19:08:43.296028   88399 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:08:43.296297   88399 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:08:43.296307   88399 out.go:358] Setting ErrFile to fd 2...
	I1009 19:08:43.296312   88399 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:08:43.296543   88399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9209/.minikube/bin
	I1009 19:08:43.296723   88399 out.go:352] Setting JSON to false
	I1009 19:08:43.296751   88399 mustload.go:65] Loading cluster: ha-622984
	I1009 19:08:43.296879   88399 notify.go:220] Checking for updates...
	I1009 19:08:43.297177   88399 config.go:182] Loaded profile config "ha-622984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:08:43.297198   88399 status.go:174] checking status of ha-622984 ...
	I1009 19:08:43.297644   88399 cli_runner.go:164] Run: docker container inspect ha-622984 --format={{.State.Status}}
	I1009 19:08:43.316552   88399 status.go:371] ha-622984 host status = "Running" (err=<nil>)
	I1009 19:08:43.316622   88399 host.go:66] Checking if "ha-622984" exists ...
	I1009 19:08:43.316883   88399 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-622984
	I1009 19:08:43.334384   88399 host.go:66] Checking if "ha-622984" exists ...
	I1009 19:08:43.334622   88399 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:08:43.334656   88399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-622984
	I1009 19:08:43.352267   88399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/ha-622984/id_rsa Username:docker}
	I1009 19:08:43.448235   88399 ssh_runner.go:195] Run: systemctl --version
	I1009 19:08:43.452839   88399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:08:43.463302   88399 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:08:43.511619   88399 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-10-09 19:08:43.502491815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:08:43.512265   88399 kubeconfig.go:125] found "ha-622984" server: "https://192.168.49.254:8443"
	I1009 19:08:43.512298   88399 api_server.go:166] Checking apiserver status ...
	I1009 19:08:43.512352   88399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:08:43.522766   88399 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1484/cgroup
	I1009 19:08:43.531744   88399 api_server.go:182] apiserver freezer: "9:freezer:/docker/d32d50f3488c0bd60541cc540696513df5396ceb164f50088e65d832f5c391f9/crio/crio-be3e7c0537ff9e328fd15ffcda5ca9bfac9a13cdebeb9c3c7ebd0ac483a2c5f6"
	I1009 19:08:43.531823   88399 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/d32d50f3488c0bd60541cc540696513df5396ceb164f50088e65d832f5c391f9/crio/crio-be3e7c0537ff9e328fd15ffcda5ca9bfac9a13cdebeb9c3c7ebd0ac483a2c5f6/freezer.state
	I1009 19:08:43.539676   88399 api_server.go:204] freezer state: "THAWED"
	I1009 19:08:43.539711   88399 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1009 19:08:43.543444   88399 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1009 19:08:43.543475   88399 status.go:463] ha-622984 apiserver status = Running (err=<nil>)
	I1009 19:08:43.543494   88399 status.go:176] ha-622984 status: &{Name:ha-622984 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:08:43.543515   88399 status.go:174] checking status of ha-622984-m02 ...
	I1009 19:08:43.543775   88399 cli_runner.go:164] Run: docker container inspect ha-622984-m02 --format={{.State.Status}}
	I1009 19:08:43.560749   88399 status.go:371] ha-622984-m02 host status = "Stopped" (err=<nil>)
	I1009 19:08:43.560770   88399 status.go:384] host is not running, skipping remaining checks
	I1009 19:08:43.560775   88399 status.go:176] ha-622984-m02 status: &{Name:ha-622984-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:08:43.560805   88399 status.go:174] checking status of ha-622984-m03 ...
	I1009 19:08:43.561035   88399 cli_runner.go:164] Run: docker container inspect ha-622984-m03 --format={{.State.Status}}
	I1009 19:08:43.581375   88399 status.go:371] ha-622984-m03 host status = "Running" (err=<nil>)
	I1009 19:08:43.581402   88399 host.go:66] Checking if "ha-622984-m03" exists ...
	I1009 19:08:43.581780   88399 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-622984-m03
	I1009 19:08:43.599156   88399 host.go:66] Checking if "ha-622984-m03" exists ...
	I1009 19:08:43.599486   88399 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:08:43.599522   88399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-622984-m03
	I1009 19:08:43.616679   88399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/ha-622984-m03/id_rsa Username:docker}
	I1009 19:08:43.712108   88399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:08:43.722842   88399 kubeconfig.go:125] found "ha-622984" server: "https://192.168.49.254:8443"
	I1009 19:08:43.722936   88399 api_server.go:166] Checking apiserver status ...
	I1009 19:08:43.722987   88399 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:08:43.732879   88399 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1394/cgroup
	I1009 19:08:43.741775   88399 api_server.go:182] apiserver freezer: "9:freezer:/docker/013334ac55e048738607cb49a245898283237567dbcfbf418cc0d901e2ac396f/crio/crio-18ad9b43417cab816f0e8902c9c096188c1f7f112c0524eb2c7e182ef82756a0"
	I1009 19:08:43.741846   88399 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/013334ac55e048738607cb49a245898283237567dbcfbf418cc0d901e2ac396f/crio/crio-18ad9b43417cab816f0e8902c9c096188c1f7f112c0524eb2c7e182ef82756a0/freezer.state
	I1009 19:08:43.749627   88399 api_server.go:204] freezer state: "THAWED"
	I1009 19:08:43.749660   88399 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1009 19:08:43.753306   88399 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1009 19:08:43.753326   88399 status.go:463] ha-622984-m03 apiserver status = Running (err=<nil>)
	I1009 19:08:43.753334   88399 status.go:176] ha-622984-m03 status: &{Name:ha-622984-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:08:43.753351   88399 status.go:174] checking status of ha-622984-m04 ...
	I1009 19:08:43.753652   88399 cli_runner.go:164] Run: docker container inspect ha-622984-m04 --format={{.State.Status}}
	I1009 19:08:43.770914   88399 status.go:371] ha-622984-m04 host status = "Running" (err=<nil>)
	I1009 19:08:43.770937   88399 host.go:66] Checking if "ha-622984-m04" exists ...
	I1009 19:08:43.771168   88399 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-622984-m04
	I1009 19:08:43.788210   88399 host.go:66] Checking if "ha-622984-m04" exists ...
	I1009 19:08:43.788487   88399 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:08:43.788529   88399 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-622984-m04
	I1009 19:08:43.805981   88399 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/ha-622984-m04/id_rsa Username:docker}
	I1009 19:08:43.899849   88399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:08:43.909997   88399 status.go:176] ha-622984-m04 status: &{Name:ha-622984-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.50s)
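
Note that the exit status 7 above is expected: "minikube status" deliberately exits non-zero when any node is down, so the test asserts on the per-node output rather than on the exit code alone. By hand (placeholder profile):

    $ minikube -p demo node stop m02
    $ minikube -p demo status || echo "exit=$?"   # per-node report, then exit=7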

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

TestMultiControlPlane/serial/RestartSecondaryNode (20.93s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 node start m02 -v=7 --alsologtostderr
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-622984 node start m02 -v=7 --alsologtostderr: (19.659866349s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-622984 status -v=7 --alsologtostderr: (1.196645284s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (20.93s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.08s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.07816573s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.08s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (198.22s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-622984 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-622984 -v=7 --alsologtostderr
E1009 19:09:08.443347   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:09:36.148293   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-622984 -v=7 --alsologtostderr: (36.597928862s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-622984 --wait=true -v=7 --alsologtostderr
E1009 19:10:03.581630   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/functional-275165/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:10:03.588012   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/functional-275165/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:10:03.599505   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/functional-275165/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:10:03.620976   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/functional-275165/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:10:03.662448   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/functional-275165/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:10:03.743933   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/functional-275165/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:10:03.905479   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/functional-275165/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:10:04.227234   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/functional-275165/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:10:04.869279   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/functional-275165/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:10:06.150760   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/functional-275165/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:10:08.712739   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/functional-275165/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:10:13.834036   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/functional-275165/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:10:24.075637   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/functional-275165/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:10:44.557361   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/functional-275165/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:11:25.519054   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/functional-275165/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-622984 --wait=true -v=7 --alsologtostderr: (2m41.505836662s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-622984
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (198.22s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.31s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-622984 node delete m03 -v=7 --alsologtostderr: (10.548843817s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.31s)
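
Deleting a control-plane node is a single command; the follow-up kubectl calls only confirm that the member list and readiness shrank accordingly. Sketch:

    $ minikube -p demo node delete m03
    $ kubectl get nodes        # m03 should no longer appear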

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

TestMultiControlPlane/serial/StopCluster (35.54s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 stop -v=7 --alsologtostderr
E1009 19:12:47.443369   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/functional-275165/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-622984 stop -v=7 --alsologtostderr: (35.441941752s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-622984 status -v=7 --alsologtostderr: exit status 7 (101.837354ms)
-- stdout --
	ha-622984
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-622984-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-622984-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1009 19:13:12.283519  106163 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:13:12.283800  106163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:13:12.283810  106163 out.go:358] Setting ErrFile to fd 2...
	I1009 19:13:12.283814  106163 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:13:12.284039  106163 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9209/.minikube/bin
	I1009 19:13:12.284257  106163 out.go:352] Setting JSON to false
	I1009 19:13:12.284282  106163 mustload.go:65] Loading cluster: ha-622984
	I1009 19:13:12.284341  106163 notify.go:220] Checking for updates...
	I1009 19:13:12.284736  106163 config.go:182] Loaded profile config "ha-622984": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:13:12.284757  106163 status.go:174] checking status of ha-622984 ...
	I1009 19:13:12.285202  106163 cli_runner.go:164] Run: docker container inspect ha-622984 --format={{.State.Status}}
	I1009 19:13:12.302697  106163 status.go:371] ha-622984 host status = "Stopped" (err=<nil>)
	I1009 19:13:12.302719  106163 status.go:384] host is not running, skipping remaining checks
	I1009 19:13:12.302727  106163 status.go:176] ha-622984 status: &{Name:ha-622984 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:13:12.302759  106163 status.go:174] checking status of ha-622984-m02 ...
	I1009 19:13:12.302996  106163 cli_runner.go:164] Run: docker container inspect ha-622984-m02 --format={{.State.Status}}
	I1009 19:13:12.319854  106163 status.go:371] ha-622984-m02 host status = "Stopped" (err=<nil>)
	I1009 19:13:12.319875  106163 status.go:384] host is not running, skipping remaining checks
	I1009 19:13:12.319881  106163 status.go:176] ha-622984-m02 status: &{Name:ha-622984-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:13:12.319904  106163 status.go:174] checking status of ha-622984-m04 ...
	I1009 19:13:12.320145  106163 cli_runner.go:164] Run: docker container inspect ha-622984-m04 --format={{.State.Status}}
	I1009 19:13:12.336939  106163 status.go:371] ha-622984-m04 host status = "Stopped" (err=<nil>)
	I1009 19:13:12.336969  106163 status.go:384] host is not running, skipping remaining checks
	I1009 19:13:12.336977  106163 status.go:176] ha-622984-m04 status: &{Name:ha-622984-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.54s)

TestMultiControlPlane/serial/RestartCluster (81.37s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-622984 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1009 19:14:08.442432   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-622984 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m20.602027172s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (81.37s)
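
The go-template used for the readiness check above prints one True/False line per node, which is easier to assert on than the full "kubectl get nodes" table. Equivalent standalone form:

    $ kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'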

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

TestMultiControlPlane/serial/AddSecondaryNode (37.84s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-622984 --control-plane -v=7 --alsologtostderr
E1009 19:15:03.581604   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/functional-275165/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-622984 --control-plane -v=7 --alsologtostderr: (36.983753376s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-622984 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (37.84s)
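
Growing the control plane back uses the same "node add" as the worker case, with --control-plane marking the new node as a control-plane member rather than a worker. Sketch:

    $ minikube -p demo node add --control-plane
    $ minikube -p demo status                    # the new node reports "type: Control Plane"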

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

TestJSONOutput/start/Command (42.64s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-229827 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1009 19:15:31.285228   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/functional-275165/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-229827 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (42.638146563s)
--- PASS: TestJSONOutput/start/Command (42.64s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.66s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-229827 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.66s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.6s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-229827 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.73s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-229827 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-229827 --output=json --user=testUser: (5.730181805s)
--- PASS: TestJSONOutput/stop/Command (5.73s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-497888 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-497888 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (68.984958ms)
-- stdout --
	{"specversion":"1.0","id":"2c34f2d1-88d6-4595-9960-572124faa486","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-497888] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2a3743b8-7351-49b9-a733-7f47e5f58195","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19780"}}
	{"specversion":"1.0","id":"7acda5b6-c4c8-496e-b54d-161010596403","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"97976cc2-926c-4e6f-a1a1-f8c60e513351","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19780-9209/kubeconfig"}}
	{"specversion":"1.0","id":"5cef4b73-462a-4eea-98c1-1f4c3dc25fd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9209/.minikube"}}
	{"specversion":"1.0","id":"fad24044-2cde-47f0-a2d4-a075bf33a244","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"780d27e6-d5de-4e7f-840e-9d671472787c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9be8e57d-afb4-4b8e-9df5-43f05b11bc1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-497888" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-497888
--- PASS: TestErrorJSONOutput (0.21s)
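
Each line minikube emits with --output=json is a CloudEvents-style envelope (specversion, id, source, type, data), as the stdout block above shows. Below is a minimal Go sketch of consuming one such line, using the io.k8s.sigs.minikube.error event from this run; the envelope struct is inferred from the log output, not copied from minikube's source.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// event mirrors the CloudEvents-style envelope seen in the stdout above.
	// The field set is inferred from the log, not from minikube's source.
	type event struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		// The io.k8s.sigs.minikube.error line from the run above, abridged.
		line := `{"specversion":"1.0","id":"9be8e57d-afb4-4b8e-9df5-43f05b11bc1c","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
		var ev event
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}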

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (26.59s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-564356 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-564356 --network=: (24.554888467s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-564356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-564356
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-564356: (2.018731533s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.59s)
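
The verification step is a plain name scan over `docker network ls`; with an empty --network flag the network appears to take the profile name. A small Go sketch of the same scan, assuming Docker is on PATH and reusing the profile name from this run:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same listing the test uses; "docker-network-564356" is the profile
		// name from the run above.
		out, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
		if err != nil {
			panic(err)
		}
		for _, name := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if name == "docker-network-564356" {
				fmt.Println("custom network exists:", name)
				return
			}
		}
		fmt.Println("custom network not found")
	}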

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (23.46s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-018431 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-018431 --network=bridge: (21.554416914s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-018431" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-018431
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-018431: (1.883205935s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.46s)

                                                
                                    
x
+
TestKicExistingNetwork (24.94s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1009 19:17:05.107803   15983 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1009 19:17:05.124887   15983 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1009 19:17:05.124970   15983 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1009 19:17:05.124988   15983 cli_runner.go:164] Run: docker network inspect existing-network
W1009 19:17:05.141527   15983 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1009 19:17:05.141555   15983 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1009 19:17:05.141572   15983 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1009 19:17:05.141694   15983 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1009 19:17:05.159570   15983 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-271b1525b1fa IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:e7:d9:05:c4} reservation:<nil>}
I1009 19:17:05.160073   15983 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015f90}
I1009 19:17:05.160103   15983 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1009 19:17:05.160148   15983 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1009 19:17:05.222854   15983 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-877884 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-877884 --network=existing-network: (22.954429817s)
helpers_test.go:175: Cleaning up "existing-network-877884" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-877884
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-877884: (1.833078592s)
I1009 19:17:30.027608   15983 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.94s)
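
The subnet-selection trace above skips 192.168.49.0/24 (taken by the default kic bridge) and settles on 192.168.58.0/24. A hedged Go sketch of that skip-taken/pick-free behavior follows; the step of 9 between candidates is inferred from the log, and the real network.go also inspects host interfaces and reserves the subnet it picks.

	package main

	import (
		"fmt"
		"net"
	)

	// firstFreeSubnet walks candidate 192.168.x.0/24 blocks in steps of 9
	// (49, 58, 67, ..., the progression visible in the log) and returns the
	// first one not in the taken set. Illustration only.
	func firstFreeSubnet(taken map[string]bool) (*net.IPNet, error) {
		for third := 49; third <= 247; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if taken[cidr] {
				continue
			}
			_, subnet, err := net.ParseCIDR(cidr)
			if err != nil {
				return nil, err
			}
			return subnet, nil
		}
		return nil, fmt.Errorf("no free 192.168.x.0/24 subnet")
	}

	func main() {
		taken := map[string]bool{"192.168.49.0/24": true} // held by the default kic bridge
		subnet, err := firstFreeSubnet(taken)
		if err != nil {
			panic(err)
		}
		fmt.Println("using free private subnet:", subnet) // 192.168.58.0/24, as above
	}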

                                                
                                    
x
+
TestKicCustomSubnet (24.32s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-207599 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-207599 --subnet=192.168.60.0/24: (22.262951032s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-207599 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-207599" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-207599
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-207599: (2.036800031s)
--- PASS: TestKicCustomSubnet (24.32s)
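
The subnet readback uses a Go-template `docker network inspect`. A sketch of the same check from Go, assuming the network still exists when it runs (the test deletes it immediately afterwards):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		want := "192.168.60.0/24" // the --subnet value passed above
		// Same Go-template inspect the test uses to read the subnet back.
		out, err := exec.Command("docker", "network", "inspect", "custom-subnet-207599",
			"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("subnet matches:", strings.TrimSpace(string(out)) == want)
	}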

                                                
                                    
x
+
TestKicStaticIP (27.38s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-724100 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-724100 --static-ip=192.168.200.200: (25.229808687s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-724100 ip
helpers_test.go:175: Cleaning up "static-ip-724100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-724100
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-724100: (2.027063203s)
--- PASS: TestKicStaticIP (27.38s)

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (49.02s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-871223 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-871223 --driver=docker  --container-runtime=crio: (22.804796771s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-886300 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-886300 --driver=docker  --container-runtime=crio: (21.013721085s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-871223
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-886300
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-886300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-886300
E1009 19:19:08.443062   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-886300: (1.830110702s)
helpers_test.go:175: Cleaning up "first-871223" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-871223
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-871223: (2.202671116s)
--- PASS: TestMinikubeProfile (49.02s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (8.40s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-995741 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-995741 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.401155227s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.40s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-995741 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (5.40s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-014661 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-014661 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.404495888s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.40s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-014661 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-995741 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-995741 --alsologtostderr -v=5: (1.623232257s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-014661 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.18s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-014661
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-014661: (1.181332494s)
--- PASS: TestMountStart/serial/Stop (1.18s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.21s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-014661
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-014661: (6.20733064s)
--- PASS: TestMountStart/serial/RestartStopped (7.21s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-014661 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (73.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-060076 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1009 19:20:03.581530   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/functional-275165/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:20:31.509892   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-060076 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m12.977993971s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (73.43s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.52s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060076 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060076 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-060076 -- rollout status deployment/busybox: (3.025320675s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060076 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060076 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060076 -- exec busybox-7dff88458-hmhqv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-060076 -- exec busybox-7dff88458-hmhqv -- nslookup kubernetes.io: (1.2831223s)
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060076 -- exec busybox-7dff88458-szf6k -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060076 -- exec busybox-7dff88458-hmhqv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060076 -- exec busybox-7dff88458-szf6k -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060076 -- exec busybox-7dff88458-hmhqv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060076 -- exec busybox-7dff88458-szf6k -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.52s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060076 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060076 -- exec busybox-7dff88458-hmhqv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060076 -- exec busybox-7dff88458-hmhqv -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060076 -- exec busybox-7dff88458-szf6k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-060076 -- exec busybox-7dff88458-szf6k -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.72s)
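
The host-IP extraction relies on busybox nslookup printing the answer on line 5, third field (`awk 'NR==5' | cut -d' ' -f3`). A Go sketch of the same parse; the sample output layout below is assumed, not taken from this run:

	package main

	import (
		"fmt"
		"strings"
	)

	// hostIP mimics the shell pipeline above: take line 5 of nslookup's
	// output (awk 'NR==5') and its third space-separated field (cut -d' ' -f3).
	func hostIP(nslookup string) string {
		lines := strings.Split(nslookup, "\n")
		if len(lines) < 5 {
			return ""
		}
		fields := strings.Split(lines[4], " ")
		if len(fields) < 3 {
			return ""
		}
		return fields[2]
	}

	func main() {
		// Hypothetical busybox nslookup output; the real layout may differ by image.
		sample := "Server:    10.96.0.10\n" +
			"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
			"\n" +
			"Name:      host.minikube.internal\n" +
			"Address 1: 192.168.67.1 host.minikube.internal\n"
		fmt.Println(hostIP(sample)) // 192.168.67.1
	}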

                                                
                                    
x
+
TestMultiNode/serial/AddNode (28.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-060076 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-060076 -v 3 --alsologtostderr: (27.733921193s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (28.33s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-060076 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.64s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (9.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 cp testdata/cp-test.txt multinode-060076:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 ssh -n multinode-060076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 cp multinode-060076:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1251307265/001/cp-test_multinode-060076.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 ssh -n multinode-060076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 cp multinode-060076:/home/docker/cp-test.txt multinode-060076-m02:/home/docker/cp-test_multinode-060076_multinode-060076-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 ssh -n multinode-060076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 ssh -n multinode-060076-m02 "sudo cat /home/docker/cp-test_multinode-060076_multinode-060076-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 cp multinode-060076:/home/docker/cp-test.txt multinode-060076-m03:/home/docker/cp-test_multinode-060076_multinode-060076-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 ssh -n multinode-060076 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 ssh -n multinode-060076-m03 "sudo cat /home/docker/cp-test_multinode-060076_multinode-060076-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 cp testdata/cp-test.txt multinode-060076-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 ssh -n multinode-060076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 cp multinode-060076-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1251307265/001/cp-test_multinode-060076-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 ssh -n multinode-060076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 cp multinode-060076-m02:/home/docker/cp-test.txt multinode-060076:/home/docker/cp-test_multinode-060076-m02_multinode-060076.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 ssh -n multinode-060076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 ssh -n multinode-060076 "sudo cat /home/docker/cp-test_multinode-060076-m02_multinode-060076.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 cp multinode-060076-m02:/home/docker/cp-test.txt multinode-060076-m03:/home/docker/cp-test_multinode-060076-m02_multinode-060076-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 ssh -n multinode-060076-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 ssh -n multinode-060076-m03 "sudo cat /home/docker/cp-test_multinode-060076-m02_multinode-060076-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 cp testdata/cp-test.txt multinode-060076-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 ssh -n multinode-060076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 cp multinode-060076-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1251307265/001/cp-test_multinode-060076-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 ssh -n multinode-060076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 cp multinode-060076-m03:/home/docker/cp-test.txt multinode-060076:/home/docker/cp-test_multinode-060076-m03_multinode-060076.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 ssh -n multinode-060076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 ssh -n multinode-060076 "sudo cat /home/docker/cp-test_multinode-060076-m03_multinode-060076.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 cp multinode-060076-m03:/home/docker/cp-test.txt multinode-060076-m02:/home/docker/cp-test_multinode-060076-m03_multinode-060076-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 ssh -n multinode-060076-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 ssh -n multinode-060076-m02 "sudo cat /home/docker/cp-test_multinode-060076-m03_multinode-060076-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.23s)
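
Every cp step above is validated by sudo-catting the file back over ssh. A condensed Go sketch of that copy-then-verify pattern, reusing the binary path, profile, and argument shapes from the log and keeping error handling minimal:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// copyAndVerify condenses the pattern above: `minikube cp` a file onto a
	// node, then `minikube ssh -n <node>` and sudo-cat it back.
	func copyAndVerify(profile, node, src, dst string) error {
		bin := "out/minikube-linux-amd64"
		if err := exec.Command(bin, "-p", profile, "cp", src, node+":"+dst).Run(); err != nil {
			return fmt.Errorf("cp failed: %w", err)
		}
		out, err := exec.Command(bin, "-p", profile, "ssh", "-n", node, "sudo cat "+dst).Output()
		if err != nil {
			return fmt.Errorf("verify failed: %w", err)
		}
		fmt.Printf("%s now holds %d bytes at %s\n", node, len(out), dst)
		return nil
	}

	func main() {
		if err := copyAndVerify("multinode-060076", "multinode-060076-m02",
			"testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
			panic(err)
		}
	}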

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-060076 node stop m03: (1.176766951s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-060076 status: exit status 7 (480.251516ms)

                                                
                                                
-- stdout --
	multinode-060076
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-060076-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-060076-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-060076 status --alsologtostderr: exit status 7 (475.055391ms)

                                                
                                                
-- stdout --
	multinode-060076
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-060076-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-060076-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:21:37.061213  171418 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:21:37.061498  171418 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:21:37.061508  171418 out.go:358] Setting ErrFile to fd 2...
	I1009 19:21:37.061512  171418 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:21:37.061711  171418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9209/.minikube/bin
	I1009 19:21:37.061869  171418 out.go:352] Setting JSON to false
	I1009 19:21:37.061893  171418 mustload.go:65] Loading cluster: multinode-060076
	I1009 19:21:37.061953  171418 notify.go:220] Checking for updates...
	I1009 19:21:37.062415  171418 config.go:182] Loaded profile config "multinode-060076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:21:37.062438  171418 status.go:174] checking status of multinode-060076 ...
	I1009 19:21:37.062934  171418 cli_runner.go:164] Run: docker container inspect multinode-060076 --format={{.State.Status}}
	I1009 19:21:37.083947  171418 status.go:371] multinode-060076 host status = "Running" (err=<nil>)
	I1009 19:21:37.083973  171418 host.go:66] Checking if "multinode-060076" exists ...
	I1009 19:21:37.084289  171418 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-060076
	I1009 19:21:37.103640  171418 host.go:66] Checking if "multinode-060076" exists ...
	I1009 19:21:37.104018  171418 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:21:37.104066  171418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-060076
	I1009 19:21:37.123089  171418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/multinode-060076/id_rsa Username:docker}
	I1009 19:21:37.220612  171418 ssh_runner.go:195] Run: systemctl --version
	I1009 19:21:37.225051  171418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:21:37.235787  171418 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:21:37.281369  171418 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-10-09 19:21:37.272352124 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:21:37.281934  171418 kubeconfig.go:125] found "multinode-060076" server: "https://192.168.67.2:8443"
	I1009 19:21:37.281960  171418 api_server.go:166] Checking apiserver status ...
	I1009 19:21:37.281990  171418 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1009 19:21:37.292654  171418 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1474/cgroup
	I1009 19:21:37.301967  171418 api_server.go:182] apiserver freezer: "9:freezer:/docker/069c7c370e1a2adc3fefbfce5511397fe288967383757622e3a09d83917300c8/crio/crio-d78cadc7779ef682e5539b88e9cf0f323cb33c4c413e1547e5fd87bb5b6738f2"
	I1009 19:21:37.302030  171418 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/069c7c370e1a2adc3fefbfce5511397fe288967383757622e3a09d83917300c8/crio/crio-d78cadc7779ef682e5539b88e9cf0f323cb33c4c413e1547e5fd87bb5b6738f2/freezer.state
	I1009 19:21:37.311258  171418 api_server.go:204] freezer state: "THAWED"
	I1009 19:21:37.311292  171418 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1009 19:21:37.315097  171418 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1009 19:21:37.315124  171418 status.go:463] multinode-060076 apiserver status = Running (err=<nil>)
	I1009 19:21:37.315134  171418 status.go:176] multinode-060076 status: &{Name:multinode-060076 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:21:37.315152  171418 status.go:174] checking status of multinode-060076-m02 ...
	I1009 19:21:37.315436  171418 cli_runner.go:164] Run: docker container inspect multinode-060076-m02 --format={{.State.Status}}
	I1009 19:21:37.332499  171418 status.go:371] multinode-060076-m02 host status = "Running" (err=<nil>)
	I1009 19:21:37.332530  171418 host.go:66] Checking if "multinode-060076-m02" exists ...
	I1009 19:21:37.332770  171418 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-060076-m02
	I1009 19:21:37.349624  171418 host.go:66] Checking if "multinode-060076-m02" exists ...
	I1009 19:21:37.349899  171418 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1009 19:21:37.349934  171418 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-060076-m02
	I1009 19:21:37.367575  171418 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19780-9209/.minikube/machines/multinode-060076-m02/id_rsa Username:docker}
	I1009 19:21:37.460159  171418 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1009 19:21:37.470928  171418 status.go:176] multinode-060076-m02 status: &{Name:multinode-060076-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:21:37.470981  171418 status.go:174] checking status of multinode-060076-m03 ...
	I1009 19:21:37.471322  171418 cli_runner.go:164] Run: docker container inspect multinode-060076-m03 --format={{.State.Status}}
	I1009 19:21:37.488019  171418 status.go:371] multinode-060076-m03 host status = "Stopped" (err=<nil>)
	I1009 19:21:37.488042  171418 status.go:384] host is not running, skipping remaining checks
	I1009 19:21:37.488050  171418 status.go:176] multinode-060076-m03 status: &{Name:multinode-060076-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.13s)
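
The stderr trace shows the status probe's final step: locate the apiserver process, confirm its freezer cgroup is THAWED, then GET /healthz and expect 200/"ok". A sketch of just the healthz call; unlike the real code, which pins the cluster CA, this skips TLS verification since the apiserver cert is self-signed.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// checkHealthz performs the probe's last step from the trace above: GET
	// the apiserver /healthz endpoint and expect HTTP 200.
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d", resp.StatusCode)
		}
		return nil
	}

	func main() {
		if err := checkHealthz("https://192.168.67.2:8443/healthz"); err != nil {
			fmt.Println("apiserver not healthy:", err)
			return
		}
		fmt.Println("apiserver healthz: ok")
	}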

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (9.15s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-060076 node start m03 -v=7 --alsologtostderr: (8.4826942s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.15s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (113.10s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-060076
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-060076
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-060076: (24.637672637s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-060076 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-060076 --wait=true -v=8 --alsologtostderr: (1m28.365996465s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-060076
--- PASS: TestMultiNode/serial/RestartKeepsNodes (113.10s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.30s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-060076 node delete m03: (4.719651285s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.30s)
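
kubectl evaluates go-templates over the JSON object tree, so the exact Ready-condition template passed above also runs unchanged against a locally decoded node list. A sketch with a hypothetical two-node document:

	package main

	import (
		"encoding/json"
		"os"
		"text/template"
	)

	// readyTmpl is the template the test passes to kubectl above.
	const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// sample is a hypothetical two-node list trimmed to the fields the
	// template touches.
	const sample = `{"items":[
	 {"status":{"conditions":[{"type":"Ready","status":"True"}]}},
	 {"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`

	func main() {
		var doc map[string]interface{}
		if err := json.Unmarshal([]byte(sample), &doc); err != nil {
			panic(err)
		}
		t := template.Must(template.New("ready").Parse(readyTmpl))
		if err := t.Execute(os.Stdout, doc); err != nil {
			panic(err)
		}
		// Output: " True" once per node whose Ready condition matched.
	}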

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (23.70s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 stop
E1009 19:24:08.443393   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-060076 stop: (23.522576698s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-060076 status: exit status 7 (88.636884ms)

                                                
                                                
-- stdout --
	multinode-060076
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-060076-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-060076 status --alsologtostderr: exit status 7 (85.066288ms)

                                                
                                                
-- stdout --
	multinode-060076
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-060076-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1009 19:24:08.699590  181135 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:24:08.699706  181135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:24:08.699717  181135 out.go:358] Setting ErrFile to fd 2...
	I1009 19:24:08.699722  181135 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:24:08.699918  181135 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9209/.minikube/bin
	I1009 19:24:08.700101  181135 out.go:352] Setting JSON to false
	I1009 19:24:08.700126  181135 mustload.go:65] Loading cluster: multinode-060076
	I1009 19:24:08.700184  181135 notify.go:220] Checking for updates...
	I1009 19:24:08.700610  181135 config.go:182] Loaded profile config "multinode-060076": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:24:08.700630  181135 status.go:174] checking status of multinode-060076 ...
	I1009 19:24:08.701127  181135 cli_runner.go:164] Run: docker container inspect multinode-060076 --format={{.State.Status}}
	I1009 19:24:08.720686  181135 status.go:371] multinode-060076 host status = "Stopped" (err=<nil>)
	I1009 19:24:08.720712  181135 status.go:384] host is not running, skipping remaining checks
	I1009 19:24:08.720720  181135 status.go:176] multinode-060076 status: &{Name:multinode-060076 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1009 19:24:08.720745  181135 status.go:174] checking status of multinode-060076-m02 ...
	I1009 19:24:08.721071  181135 cli_runner.go:164] Run: docker container inspect multinode-060076-m02 --format={{.State.Status}}
	I1009 19:24:08.738184  181135 status.go:371] multinode-060076-m02 host status = "Stopped" (err=<nil>)
	I1009 19:24:08.738224  181135 status.go:384] host is not running, skipping remaining checks
	I1009 19:24:08.738230  181135 status.go:176] multinode-060076-m02 status: &{Name:multinode-060076-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.70s)
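
Note the exit codes: `minikube status` signals cluster state through its exit status, and 7 (seen twice above) means at least one host is stopped. A sketch that branches on the code rather than scraping stdout; binary path and profile are from this run:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		err := exec.Command("out/minikube-linux-amd64", "-p", "multinode-060076", "status").Run()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("all components running")
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
			fmt.Println("one or more hosts stopped (exit status 7, as in the log)")
		default:
			fmt.Println("status check failed:", err)
		}
	}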

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (56.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-060076 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1009 19:25:03.581360   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/functional-275165/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-060076 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (56.156655659s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-060076 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (56.74s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (25.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-060076
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-060076-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-060076-m02 --driver=docker  --container-runtime=crio: exit status 14 (68.043249ms)

                                                
                                                
-- stdout --
	* [multinode-060076-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-9209/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9209/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-060076-m02' is duplicated with machine name 'multinode-060076-m02' in profile 'multinode-060076'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-060076-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-060076-m03 --driver=docker  --container-runtime=crio: (23.314022056s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-060076
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-060076: exit status 80 (269.129174ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-060076 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-060076-m03 already exists in multinode-060076-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-060076-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-060076-m03: (1.827942103s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.53s)
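
The MK_USAGE rejection fires because the requested profile name collides with a machine name (<profile>-m02, -m03, ...) of an existing multi-node profile. A sketch of that collision check; the name pattern is inferred from the log, and the real check consults minikube's profile store rather than a regexp:

	package main

	import (
		"fmt"
		"regexp"
	)

	// machineSuffix matches names shaped like a multi-node machine name,
	// <profile>-mNN; the pattern is inferred from the output above.
	var machineSuffix = regexp.MustCompile(`^(.+)-m\d{2}$`)

	func conflictsWithMachine(requested string, profiles []string) bool {
		m := machineSuffix.FindStringSubmatch(requested)
		if m == nil {
			return false
		}
		for _, p := range profiles {
			if p == m[1] {
				return true // requested name duplicates a machine of profile p
			}
		}
		return false
	}

	func main() {
		existing := []string{"multinode-060076"}
		fmt.Println(conflictsWithMachine("multinode-060076-m02", existing)) // true -> MK_USAGE
		fmt.Println(conflictsWithMachine("some-other-profile", existing))   // false -> allowed
	}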

                                                
                                    
x
+
TestPreload (102.13s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-619912 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1009 19:26:26.646953   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/functional-275165/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-619912 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m15.659314013s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-619912 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-619912 image pull gcr.io/k8s-minikube/busybox: (1.143872842s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-619912
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-619912: (5.662646228s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-619912 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-619912 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (17.193170167s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-619912 image list
helpers_test.go:175: Cleaning up "test-preload-619912" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-619912
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-619912: (2.250654397s)
--- PASS: TestPreload (102.13s)
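
The closing `image list` asserts that the image pulled before the stop survived the restart onto a preloaded tarball. A sketch of the same assertion, reusing the binary path, profile, and image ref from this run:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Re-run the final check: busybox was pulled before the stop and
		// should still be listed after the preload restart.
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "test-preload-619912", "image", "list").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("busybox survived the restart:",
			strings.Contains(string(out), "gcr.io/k8s-minikube/busybox"))
	}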

                                                
                                    
x
+
TestScheduledStopUnix (98.73s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-379629 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-379629 --memory=2048 --driver=docker  --container-runtime=crio: (23.034242544s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-379629 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-379629 -n scheduled-stop-379629
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-379629 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1009 19:27:40.436250   15983 retry.go:31] will retry after 144.362µs: open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/scheduled-stop-379629/pid: no such file or directory
I1009 19:27:40.436688   15983 retry.go:31] will retry after 196.369µs: open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/scheduled-stop-379629/pid: no such file or directory
I1009 19:27:40.437800   15983 retry.go:31] will retry after 185.582µs: open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/scheduled-stop-379629/pid: no such file or directory
I1009 19:27:40.438960   15983 retry.go:31] will retry after 301.794µs: open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/scheduled-stop-379629/pid: no such file or directory
I1009 19:27:40.440097   15983 retry.go:31] will retry after 649.358µs: open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/scheduled-stop-379629/pid: no such file or directory
I1009 19:27:40.441241   15983 retry.go:31] will retry after 886.975µs: open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/scheduled-stop-379629/pid: no such file or directory
I1009 19:27:40.442417   15983 retry.go:31] will retry after 1.685341ms: open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/scheduled-stop-379629/pid: no such file or directory
I1009 19:27:40.444616   15983 retry.go:31] will retry after 962.862µs: open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/scheduled-stop-379629/pid: no such file or directory
I1009 19:27:40.445756   15983 retry.go:31] will retry after 3.666318ms: open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/scheduled-stop-379629/pid: no such file or directory
I1009 19:27:40.449992   15983 retry.go:31] will retry after 5.602913ms: open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/scheduled-stop-379629/pid: no such file or directory
I1009 19:27:40.456240   15983 retry.go:31] will retry after 4.208372ms: open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/scheduled-stop-379629/pid: no such file or directory
I1009 19:27:40.461466   15983 retry.go:31] will retry after 4.691685ms: open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/scheduled-stop-379629/pid: no such file or directory
I1009 19:27:40.466910   15983 retry.go:31] will retry after 7.847255ms: open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/scheduled-stop-379629/pid: no such file or directory
I1009 19:27:40.475236   15983 retry.go:31] will retry after 28.748104ms: open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/scheduled-stop-379629/pid: no such file or directory
I1009 19:27:40.504498   15983 retry.go:31] will retry after 34.029108ms: open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/scheduled-stop-379629/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-379629 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-379629 -n scheduled-stop-379629
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-379629
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-379629 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-379629
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-379629: exit status 7 (71.015797ms)

-- stdout --
	scheduled-stop-379629
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-379629 -n scheduled-stop-379629
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-379629 -n scheduled-stop-379629: exit status 7 (67.225746ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-379629" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-379629
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-379629: (4.37522998s)
--- PASS: TestScheduledStopUnix (98.73s)
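
The retry.go lines above show the test polling for the scheduled-stop pid file with a roughly doubling backoff between attempts. A minimal Go sketch of that pattern, assuming a hypothetical waitForPidFile helper rather than minikube's actual retry package:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPidFile polls for the pid file, roughly doubling the wait between
// attempts, mirroring the growing intervals in the retry.go log lines above.
func waitForPidFile(path string, deadline time.Duration) ([]byte, error) {
	wait := 150 * time.Microsecond
	start := time.Now()
	for {
		data, err := os.ReadFile(path)
		if err == nil {
			return data, nil
		}
		if time.Since(start) > deadline {
			return nil, fmt.Errorf("timed out waiting for %s: %w", path, err)
		}
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		wait *= 2 // exponential backoff
	}
}

func main() {
	// /tmp/example-profile/pid is a placeholder path, not the test's real one.
	if _, err := waitForPidFile("/tmp/example-profile/pid", time.Second); err != nil {
		fmt.Println(err)
	}
}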

TestInsufficientStorage (12.4s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-222175 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-222175 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.05908875s)

-- stdout --
	{"specversion":"1.0","id":"114054ea-7796-434b-b4e1-462c8ede8592","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-222175] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"29d6b768-6f5f-4b31-a478-d6f52c5e67f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19780"}}
	{"specversion":"1.0","id":"a28ba41a-a416-4e18-a967-3f5dc01b33cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ce2a1067-a161-4631-9939-4aaed8ae00e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19780-9209/kubeconfig"}}
	{"specversion":"1.0","id":"248f50ef-30a4-4f11-8493-ca291b97d09d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9209/.minikube"}}
	{"specversion":"1.0","id":"9414f1e6-bc8e-4932-a47d-136a0a46a3d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6c4a254c-77be-41f9-ae57-1edc2767b2f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"7f299742-ed1e-4d63-93cc-84657b57144a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"1a5df901-9337-4b87-91f9-8bca88fdb9f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"2dbdbd09-45a0-480c-8a85-c8fbb656884d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a7824edb-1d79-491d-b3b5-0ee00a76dc09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"7dba801a-f52b-4f21-8747-da7f4a55cce5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-222175\" primary control-plane node in \"insufficient-storage-222175\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"82e2840d-59c8-4e0d-9a05-f506e2ebcd1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1728382586-19774 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"46ed9932-e944-4a0b-9f4c-1cb90bc1b4c5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"09d02540-22f7-4c08-9b9b-b3c1536760dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-222175 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-222175 --output=json --layout=cluster: exit status 7 (267.038221ms)

-- stdout --
	{"Name":"insufficient-storage-222175","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-222175","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1009 19:29:06.040854  203338 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-222175" does not appear in /home/jenkins/minikube-integration/19780-9209/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-222175 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-222175 --output=json --layout=cluster: exit status 7 (261.954046ms)

-- stdout --
	{"Name":"insufficient-storage-222175","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-222175","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1009 19:29:06.303475  203435 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-222175" does not appear in /home/jenkins/minikube-integration/19780-9209/kubeconfig
	E1009 19:29:06.313124  203435 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/insufficient-storage-222175/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-222175" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-222175
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-222175: (1.81148172s)
--- PASS: TestInsufficientStorage (12.40s)
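
The -- stdout -- blocks above are minikube's --output=json event stream: one CloudEvents-style JSON object per line, ending in an io.k8s.sigs.minikube.error event with exitcode 26 (RSRC_DOCKER_STORAGE), plus the --layout=cluster status document. A small Go sketch of consuming that event stream; the struct covers only the fields visible in the events shown here, not an exhaustive schema:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent models only the fields seen in the stream above; all data
// values in those events are strings.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	// e.g. piped from: minikube start -p demo --output=json
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1<<20), 1<<20) // event lines can be long
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON lines
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exitcode %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}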

TestRunningBinaryUpgrade (68.42s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1966926850 start -p running-upgrade-126450 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1966926850 start -p running-upgrade-126450 --memory=2200 --vm-driver=docker  --container-runtime=crio: (34.132834363s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-126450 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-126450 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.560539486s)
helpers_test.go:175: Cleaning up "running-upgrade-126450" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-126450
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-126450: (2.36795271s)
--- PASS: TestRunningBinaryUpgrade (68.42s)

TestKubernetesUpgrade (360.02s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-633081 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-633081 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (49.253212816s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-633081
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-633081: (1.208853504s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-633081 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-633081 status --format={{.Host}}: exit status 7 (65.736689ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-633081 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-633081 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m26.593985615s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-633081 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-633081 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-633081 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (75.366602ms)

-- stdout --
	* [kubernetes-upgrade-633081] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-9209/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9209/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-633081
	    minikube start -p kubernetes-upgrade-633081 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6330812 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-633081 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-633081 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-633081 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (39.831049015s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-633081" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-633081
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-633081: (2.913804941s)
--- PASS: TestKubernetesUpgrade (360.02s)
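
The K8S_DOWNGRADE_UNSUPPORTED exit above comes from a version gate: starting against an existing cluster is refused when the requested Kubernetes version is older than the deployed one (v1.20.0 predates v1.31.1). An illustrative Go sketch of such a check, with a hand-rolled comparison rather than minikube's actual code:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parse splits a "vMAJOR.MINOR.PATCH" string into its numeric components.
func parse(v string) (parts [3]int) {
	for i, s := range strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3) {
		parts[i], _ = strconv.Atoi(s)
	}
	return
}

// older reports whether version a predates version b.
func older(a, b string) bool {
	pa, pb := parse(a), parse(b)
	for i := 0; i < 3; i++ {
		if pa[i] != pb[i] {
			return pa[i] < pb[i]
		}
	}
	return false
}

func main() {
	existing, requested := "v1.31.1", "v1.20.0" // versions from the log above
	if older(requested, existing) {
		fmt.Printf("Exiting due to K8S_DOWNGRADE_UNSUPPORTED: unable to safely downgrade %s cluster to %s\n", existing, requested)
	}
}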

TestMissingContainerUpgrade (131.52s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2327706394 start -p missing-upgrade-881895 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2327706394 start -p missing-upgrade-881895 --memory=2200 --driver=docker  --container-runtime=crio: (59.883590993s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-881895
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-881895: (10.393741623s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-881895
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-881895 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-881895 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (58.434297977s)
helpers_test.go:175: Cleaning up "missing-upgrade-881895" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-881895
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-881895: (2.050077612s)
--- PASS: TestMissingContainerUpgrade (131.52s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-991267 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-991267 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (93.627962ms)

-- stdout --
	* [NoKubernetes-991267] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-9209/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9209/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (35.29s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-991267 --driver=docker  --container-runtime=crio
E1009 19:29:08.443139   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-991267 --driver=docker  --container-runtime=crio: (34.909716708s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-991267 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (35.29s)

TestNetworkPlugins/group/false (7.77s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-433038 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-433038 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (202.404589ms)

-- stdout --
	* [false-433038] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19780
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19780-9209/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9209/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1009 19:29:12.021912  205710 out.go:345] Setting OutFile to fd 1 ...
	I1009 19:29:12.022215  205710 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:29:12.022229  205710 out.go:358] Setting ErrFile to fd 2...
	I1009 19:29:12.022235  205710 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1009 19:29:12.022578  205710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19780-9209/.minikube/bin
	I1009 19:29:12.023411  205710 out.go:352] Setting JSON to false
	I1009 19:29:12.024869  205710 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4303,"bootTime":1728497849,"procs":233,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1009 19:29:12.024997  205710 start.go:139] virtualization: kvm guest
	I1009 19:29:12.028727  205710 out.go:177] * [false-433038] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1009 19:29:12.030292  205710 out.go:177]   - MINIKUBE_LOCATION=19780
	I1009 19:29:12.030392  205710 notify.go:220] Checking for updates...
	I1009 19:29:12.032865  205710 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1009 19:29:12.034175  205710 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19780-9209/kubeconfig
	I1009 19:29:12.035591  205710 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19780-9209/.minikube
	I1009 19:29:12.039235  205710 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1009 19:29:12.040811  205710 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1009 19:29:12.042679  205710 config.go:182] Loaded profile config "NoKubernetes-991267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:29:12.042800  205710 config.go:182] Loaded profile config "force-systemd-env-007654": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:29:12.042897  205710 config.go:182] Loaded profile config "offline-crio-980106": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I1009 19:29:12.043008  205710 driver.go:394] Setting default libvirt URI to qemu:///system
	I1009 19:29:12.077285  205710 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1009 19:29:12.077580  205710 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1009 19:29:12.147868  205710 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:58 OomKillDisable:true NGoroutines:90 SystemTime:2024-10-09 19:29:12.134056865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647939584 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1009 19:29:12.148061  205710 docker.go:318] overlay module found
	I1009 19:29:12.149943  205710 out.go:177] * Using the docker driver based on user configuration
	I1009 19:29:12.151397  205710 start.go:297] selected driver: docker
	I1009 19:29:12.151420  205710 start.go:901] validating driver "docker" against <nil>
	I1009 19:29:12.151437  205710 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1009 19:29:12.153965  205710 out.go:201] 
	W1009 19:29:12.155397  205710 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1009 19:29:12.159250  205710 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-433038 [pass: true] --------------------------------

>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-433038

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-433038

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-433038

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-433038

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-433038

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-433038

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-433038

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-433038

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-433038

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-433038

>>> host: /etc/nsswitch.conf:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> host: /etc/hosts:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> host: /etc/resolv.conf:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-433038

>>> host: crictl pods:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> host: crictl containers:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> k8s: describe netcat deployment:
error: context "false-433038" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-433038" does not exist

>>> k8s: netcat logs:
error: context "false-433038" does not exist

>>> k8s: describe coredns deployment:
error: context "false-433038" does not exist

>>> k8s: describe coredns pods:
error: context "false-433038" does not exist

>>> k8s: coredns logs:
error: context "false-433038" does not exist

>>> k8s: describe api server pod(s):
error: context "false-433038" does not exist

>>> k8s: api server logs:
error: context "false-433038" does not exist

>>> host: /etc/cni:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> host: ip a s:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> host: ip r s:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> host: iptables-save:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> host: iptables table nat:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> k8s: describe kube-proxy daemon set:
error: context "false-433038" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-433038" does not exist

>>> k8s: kube-proxy logs:
error: context "false-433038" does not exist

>>> host: kubelet daemon status:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> host: kubelet daemon config:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> k8s: kubelet logs:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-433038

>>> host: docker daemon status:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> host: docker daemon config:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> host: /etc/docker/daemon.json:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> host: docker system info:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> host: cri-docker daemon status:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> host: cri-docker daemon config:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> host: cri-dockerd version:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> host: containerd daemon status:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> host: containerd daemon config:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> host: /etc/containerd/config.toml:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> host: containerd config dump:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> host: crio daemon status:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> host: crio daemon config:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> host: /etc/crio:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

>>> host: crio config:
* Profile "false-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-433038"

----------------------- debugLogs end: false-433038 [took: 7.401153939s] --------------------------------
helpers_test.go:175: Cleaning up "false-433038" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-433038
--- PASS: TestNetworkPlugins/group/false (7.77s)
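
The MK_USAGE failure above is pure flag validation: minikube rejects --cni=false with the crio runtime before any cluster work starts, which is why the whole run takes only ~200ms. A sketch of the shape of that check; the rule below (only the Docker runtime may run without CNI) is an assumption for illustration, not minikube's exact validation logic:

package main

import "fmt"

// validateCNI mirrors the rule behind the exit above: runtimes other than
// Docker rely on a CNI plugin, so --cni=false is rejected for crio.
// (Assumed rule for illustration only.)
func validateCNI(containerRuntime, cni string) error {
	if cni == "false" && containerRuntime != "docker" {
		return fmt.Errorf("MK_USAGE: The %q container runtime requires CNI", containerRuntime)
	}
	return nil
}

func main() {
	if err := validateCNI("crio", "false"); err != nil {
		fmt.Println("X Exiting due to", err) // the exit status 14 path in the log
	}
}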

TestNoKubernetes/serial/StartWithStopK8s (8.93s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-991267 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-991267 --no-kubernetes --driver=docker  --container-runtime=crio: (6.669725073s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-991267 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-991267 status -o json: exit status 2 (288.930606ms)

-- stdout --
	{"Name":"NoKubernetes-991267","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-991267
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-991267: (1.975376343s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.93s)
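
The status -o json line above encodes exactly the state the test asserts for a --no-kubernetes profile: host running, kubelet and apiserver stopped. A Go sketch of decoding it, with a struct mirroring the fields visible in that output:

package main

import (
	"encoding/json"
	"fmt"
)

// profileStatus mirrors the fields of the `minikube status -o json` line above.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-991267","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st profileStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		panic(err)
	}
	// The command itself exits 2 when components are stopped, as the log
	// shows, but the JSON document is still well-formed.
	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", st.Name, st.Host, st.Kubelet, st.APIServer)
}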

TestNoKubernetes/serial/Start (12.62s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-991267 --no-kubernetes --driver=docker  --container-runtime=crio
E1009 19:30:03.580916   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/functional-275165/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-991267 --no-kubernetes --driver=docker  --container-runtime=crio: (12.612094907s)
--- PASS: TestNoKubernetes/serial/Start (12.62s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-991267 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-991267 "sudo systemctl is-active --quiet service kubelet": exit status 1 (301.698868ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)
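
The check above verifies kubelet is not running by executing systemctl inside the node over `minikube ssh` and treating a non-zero exit (exit status 1 here, wrapping systemctl's status 3 for "inactive") as the expected outcome. A sketch of the same probe with os/exec; a `minikube` binary on PATH stands in for the test's out/minikube-linux-amd64:

package main

import (
	"fmt"
	"os/exec"
)

// kubeletActive runs systemctl inside the node over `minikube ssh`.
// Exit 0 means the unit is active; non-zero (1/3) means it is not.
func kubeletActive(profile string) bool {
	cmd := exec.Command("minikube", "ssh", "-p", profile,
		"sudo systemctl is-active --quiet service kubelet")
	return cmd.Run() == nil
}

func main() {
	fmt.Println("kubelet active:", kubeletActive("NoKubernetes-991267"))
}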

TestNoKubernetes/serial/ProfileList (11.72s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (2.595400671s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:179: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (9.120512662s)
--- PASS: TestNoKubernetes/serial/ProfileList (11.72s)

TestNoKubernetes/serial/Stop (1.84s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-991267
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-991267: (1.838555361s)
--- PASS: TestNoKubernetes/serial/Stop (1.84s)

TestNoKubernetes/serial/StartNoArgs (8.2s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-991267 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-991267 --driver=docker  --container-runtime=crio: (8.196465039s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.20s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-991267 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-991267 "sudo systemctl is-active --quiet service kubelet": exit status 1 (282.101662ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestStoppedBinaryUpgrade/Setup (0.55s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.55s)

TestStoppedBinaryUpgrade/Upgrade (55.04s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4004768044 start -p stopped-upgrade-255744 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4004768044 start -p stopped-upgrade-255744 --memory=2200 --vm-driver=docker  --container-runtime=crio: (27.125564227s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4004768044 -p stopped-upgrade-255744 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4004768044 -p stopped-upgrade-255744 stop: (2.296307121s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-255744 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-255744 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.618706529s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (55.04s)
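
Condensed sketch of the three-step flow this test exercises: create the cluster with the old release binary, stop it, then start it with the binary under test, which upgrades the stopped cluster in place. Error handling is trimmed; binary paths and profile name are the ones from the log:

package main

import (
	"os"
	"os/exec"
)

// run executes one minikube invocation, streaming its output to the console.
func run(bin string, args ...string) error {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	oldBin := "/tmp/minikube-v1.26.0.4004768044" // old release binary from the log
	newBin := "out/minikube-linux-amd64"         // binary under test
	profile := "stopped-upgrade-255744"

	_ = run(oldBin, "start", "-p", profile, "--memory=2200", "--vm-driver=docker", "--container-runtime=crio")
	_ = run(oldBin, "-p", profile, "stop")
	_ = run(newBin, "start", "-p", profile, "--memory=2200", "--driver=docker", "--container-runtime=crio")
}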

TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-255744
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

TestPause/serial/Start (44.58s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-109117 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-109117 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (44.575421015s)
--- PASS: TestPause/serial/Start (44.58s)

TestNetworkPlugins/group/auto/Start (41.46s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-433038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-433038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (41.455494102s)
--- PASS: TestNetworkPlugins/group/auto/Start (41.46s)

TestPause/serial/SecondStartNoReconfiguration (21.05s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-109117 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-109117 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.037631813s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (21.05s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-433038 "pgrep -a kubelet"
I1009 19:33:20.888360   15983 config.go:182] Loaded profile config "auto-433038": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)
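
The KubeletFlags checks rely on pgrep -a, which prints each matching PID together with its full command line; the test then inspects that output for the expected flags. Reproduced by hand (the output shape is illustrative, not from this run):

	out/minikube-linux-amd64 ssh -p auto-433038 "pgrep -a kubelet"
	# 1234 /var/lib/minikube/binaries/v1.31.1/kubelet --container-runtime-endpoint=... (example)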

TestNetworkPlugins/group/auto/NetCatPod (10.2s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-433038 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qrmls" [47f01fec-d97c-4993-babc-db00735bfa9c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qrmls" [47f01fec-d97c-4993-babc-db00735bfa9c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004450946s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.20s)
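
The NetCatPod steps force-replace the netcat deployment, then poll pods labeled app=netcat until one reports Running. An equivalent manual wait, as a sketch:

	kubectl --context auto-433038 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-433038 wait --for=condition=Ready pod -l app=netcat --timeout=15m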

TestPause/serial/Pause (0.71s)
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-109117 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.71s)

TestPause/serial/VerifyStatus (0.3s)
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-109117 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-109117 --output=json --layout=cluster: exit status 2 (294.774037ms)
-- stdout --
	{"Name":"pause-109117","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-109117","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.30s)
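
In this layout a paused cluster reports StatusCode 418 / StatusName "Paused" for the node and apiserver while the kubelet shows "Stopped", and the status command itself exits 2, so the exit code and the JSON payload have to be read together. A sketch that pulls out the kubelet state (assumes jq is available):

	out/minikube-linux-amd64 status -p pause-109117 --output=json --layout=cluster | jq -r '.Nodes[].Components.kubelet.StatusName'
	# prints "Stopped" while the cluster is paused; the status command exits 2 rather than 0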

TestPause/serial/Unpause (0.66s)
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-109117 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

TestPause/serial/PauseAgain (0.83s)
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-109117 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.83s)

TestPause/serial/DeletePaused (2.63s)
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-109117 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-109117 --alsologtostderr -v=5: (2.630166401s)
--- PASS: TestPause/serial/DeletePaused (2.63s)

TestPause/serial/VerifyDeletedResources (0.78s)
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-109117
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-109117: exit status 1 (24.792107ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-109117: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.78s)
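
Deletion is verified negatively: once the profile is deleted, docker volume inspect on its name must fail with "no such volume" (exit status 1), and the profile must also be absent from docker ps and docker network ls. The volume check in isolation:

	if ! docker volume inspect pause-109117 >/dev/null 2>&1; then
		echo "volume gone, delete succeeded"
	fi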

TestNetworkPlugins/group/auto/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-433038 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-433038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/auto/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-433038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
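
The HairPin check has the pod dial its own Service name (netcat on port 8080), which only succeeds when traffic can be routed back into the originating pod, i.e. when the CNI and kube-proxy handle hairpin traffic; -z makes nc probe the port without sending data and -w 5 bounds the wait. The probe by itself:

	kubectl --context auto-433038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"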

TestNetworkPlugins/group/kindnet/Start (45.52s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-433038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-433038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (45.514849468s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (45.52s)

TestNetworkPlugins/group/flannel/Start (47.59s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-433038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-433038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (47.589315043s)
--- PASS: TestNetworkPlugins/group/flannel/Start (47.59s)

TestNetworkPlugins/group/enable-default-cni/Start (70.01s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-433038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1009 19:34:08.442529   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-433038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m10.012882697s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (70.01s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-5vgdx" [179ba392-ee95-4564-a5cc-1f41f47254c4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004105567s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-433038 "pgrep -a kubelet"
I1009 19:34:23.268646   15983 config.go:182] Loaded profile config "kindnet-433038": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.26s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.2s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-433038 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-z66sf" [034baf29-8f6a-400c-9293-a58ac098f211] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-z66sf" [034baf29-8f6a-400c-9293-a58ac098f211] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004260462s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.20s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-7fg9t" [d58b5465-9c8a-4bf8-89a4-252d9b203fc2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004142088s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-433038 "pgrep -a kubelet"
I1009 19:34:33.191019   15983 config.go:182] Loaded profile config "flannel-433038": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/flannel/NetCatPod (9.23s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-433038 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cb788" [dd758292-387a-4739-8d37-3a1e72bc4b07] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-cb788" [dd758292-387a-4739-8d37-3a1e72bc4b07] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004364632s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.23s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-433038 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-433038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-433038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/flannel/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-433038 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-433038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-433038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

TestNetworkPlugins/group/bridge/Start (68.79s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-433038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-433038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m8.785493407s)
--- PASS: TestNetworkPlugins/group/bridge/Start (68.79s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-433038 "pgrep -a kubelet"
I1009 19:35:02.278657   15983 config.go:182] Loaded profile config "enable-default-cni-433038": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-433038 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-x7v6t" [55d3d4b6-70c5-4bcd-acc8-107a2718effa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-x7v6t" [55d3d4b6-70c5-4bcd-acc8-107a2718effa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003893483s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.19s)

TestNetworkPlugins/group/custom-flannel/Start (49.02s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-433038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1009 19:35:03.581435   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/functional-275165/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-433038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (49.015795088s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (49.02s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-433038 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-433038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-433038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/calico/Start (55.47s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-433038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-433038 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (55.466346265s)
--- PASS: TestNetworkPlugins/group/calico/Start (55.47s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-433038 "pgrep -a kubelet"
I1009 19:35:52.272318   15983 config.go:182] Loaded profile config "custom-flannel-433038": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.2s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-433038 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qpdxf" [9f143003-b650-4639-aaa1-fe6d60f21ffb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qpdxf" [9f143003-b650-4639-aaa1-fe6d60f21ffb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004105401s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.20s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-433038 "pgrep -a kubelet"
I1009 19:36:01.629029   15983 config.go:182] Loaded profile config "bridge-433038": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

TestNetworkPlugins/group/bridge/NetCatPod (10.23s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-433038 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ls7qc" [e537c8ef-10c1-4c74-a205-70568ee17304] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ls7qc" [e537c8ef-10c1-4c74-a205-70568ee17304] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005214227s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.23s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-433038 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-433038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-433038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

TestStartStop/group/old-k8s-version/serial/FirstStart (131.67s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-071531 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-071531 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m11.66870339s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (131.67s)

TestNetworkPlugins/group/bridge/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-433038 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.23s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-433038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-433038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

TestStartStop/group/no-preload/serial/FirstStart (59.4s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-637667 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-637667 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (59.402024583s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (59.40s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vcmrq" [bc625fb3-2465-4bc1-9556-de6e623abef5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005431073s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestStartStop/group/embed-certs/serial/FirstStart (52.55s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-190407 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-190407 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (52.552190666s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (52.55s)

TestNetworkPlugins/group/calico/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-433038 "pgrep -a kubelet"
I1009 19:36:37.013405   15983 config.go:182] Loaded profile config "calico-433038": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

TestNetworkPlugins/group/calico/NetCatPod (11.68s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-433038 replace --force -f testdata/netcat-deployment.yaml
I1009 19:36:37.441312   15983 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1009 19:36:37.453116   15983 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6r9gn" [1c9b7740-1410-4b1b-aa7e-da99d15a6d3d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6r9gn" [1c9b7740-1410-4b1b-aa7e-da99d15a6d3d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004186544s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.68s)

TestNetworkPlugins/group/calico/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-433038 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-433038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

TestNetworkPlugins/group/calico/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-433038 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)
E1009 19:41:22.340846   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/bridge-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:24.392465   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/enable-default-cni-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:30.650586   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/calico-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:30.657026   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/calico-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:30.668395   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/calico-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:30.689855   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/calico-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:30.731336   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/calico-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:30.812796   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/calico-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:30.975059   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/calico-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:31.296895   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/calico-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:31.939106   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/calico-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:33.220633   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/calico-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:33.440220   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/custom-flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:35.782541   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/calico-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:40.903939   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/calico-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:42.822911   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/bridge-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:51.146023   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/calico-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:42:00.864160   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/kindnet-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:42:10.784633   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
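
The cert_rotation errors interleaved above are client-side noise, not test failures: the test binary's certificate watcher keeps trying to reload client certs for profiles (calico-433038, bridge-433038, and others) that earlier tests already deleted, so every reload fails with "no such file or directory". Confirming the files are simply gone:

	ls /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/calico-433038/client.crt
	# ls: cannot access '...': No such file or directory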

TestStartStop/group/newest-cni/serial/FirstStart (25.53s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-340536 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-340536 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (25.531219812s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (25.53s)

TestStartStop/group/embed-certs/serial/DeployApp (8.24s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-190407 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [15b8c6cb-b56d-4581-8fc9-7899b1db9814] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [15b8c6cb-b56d-4581-8fc9-7899b1db9814] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004337853s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-190407 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.24s)
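
The trailing ulimit -n exec checks the open file-descriptor limit that the busybox container actually got from the runtime. Run by hand it looks like:

	kubectl --context embed-certs-190407 exec busybox -- /bin/sh -c "ulimit -n"
	# prints the per-process open-file limit inside the pod (value depends on the runtime config)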

TestStartStop/group/no-preload/serial/DeployApp (7.29s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-637667 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [99d1f082-fc5b-4ce0-986d-c1e8f8cb6505] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [99d1f082-fc5b-4ce0-986d-c1e8f8cb6505] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.004321251s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-637667 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.29s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-190407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-190407 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-637667 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-637667 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/embed-certs/serial/Stop (11.98s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-190407 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-190407 --alsologtostderr -v=3: (11.97513907s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.98s)

TestStartStop/group/no-preload/serial/Stop (11.95s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-637667 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-637667 --alsologtostderr -v=3: (11.950454762s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.95s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.77s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-340536 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.77s)

TestStartStop/group/newest-cni/serial/Stop (1.19s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-340536 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-340536 --alsologtostderr -v=3: (1.185548089s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.19s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-340536 -n newest-cni-340536
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-340536 -n newest-cni-340536: exit status 7 (67.760485ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-340536 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)
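
For a stopped profile, minikube status prints Stopped and exits 7, which the suite explicitly tolerates ("may be ok") before enabling the addon. The same tolerant check as a sketch:

	out/minikube-linux-amd64 status --format='{{.Host}}' -p newest-cni-340536 -n newest-cni-340536
	# exit status 7 here just means the host is stopped, which is fine for a subsequent
	# out/minikube-linux-amd64 addons enable dashboard -p newest-cni-340536 ...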

TestStartStop/group/newest-cni/serial/SecondStart (13.12s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-340536 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-340536 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (12.775951837s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-340536 -n newest-cni-340536
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.12s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-190407 -n embed-certs-190407
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-190407 -n embed-certs-190407: exit status 7 (83.639487ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-190407 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (264.06s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-190407 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-190407 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m23.724764908s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-190407 -n embed-certs-190407
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (264.06s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-637667 -n no-preload-637667
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-637667 -n no-preload-637667: exit status 7 (104.343785ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-637667 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/no-preload/serial/SecondStart (263.26s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-637667 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-637667 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m22.928674889s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-637667 -n no-preload-637667
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (263.26s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-340536 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)
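
For context on what VerifyKubernetesImages does above: it lists the cluster's images as JSON and reports anything outside the set minikube itself deploys, which is why kindest/kindnetd is called out. The sketch below shows one way to drive that check; the repoTags field name is an assumption about the JSON shape, and the expected-image set is purely illustrative.

package main

import (
    "encoding/json"
    "fmt"
    "os/exec"
)

// image models one entry of "image list --format=json".
// The field name is an assumption, not confirmed by this log.
type image struct {
    RepoTags []string `json:"repoTags"`
}

func main() {
    out, err := exec.Command("out/minikube-linux-amd64", "-p", "newest-cni-340536",
        "image", "list", "--format=json").Output()
    if err != nil {
        panic(err)
    }
    var images []image
    if err := json.Unmarshal(out, &images); err != nil {
        panic(err)
    }
    // Illustrative allowlist; the real test compares against the exact
    // image set expected for the Kubernetes version under test.
    expected := map[string]bool{
        "registry.k8s.io/kube-apiserver:v1.31.1": true,
        "registry.k8s.io/etcd:3.5.15-0":          true,
    }
    for _, img := range images {
        for _, tag := range img.RepoTags {
            if !expected[tag] {
                fmt.Println("Found non-minikube image:", tag)
            }
        }
    }
}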

TestStartStop/group/newest-cni/serial/Pause (3.7s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-340536 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-340536 --alsologtostderr -v=1: (1.201343579s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-340536 -n newest-cni-340536
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-340536 -n newest-cni-340536: exit status 2 (346.650579ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-340536 -n newest-cni-340536
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-340536 -n newest-cni-340536: exit status 2 (333.363578ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-340536 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-340536 -n newest-cni-340536
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-340536 -n newest-cni-340536
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.70s)
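
The Pause subtest above always follows the same shape: pause the profile, confirm the apiserver reports Paused and the kubelet reports Stopped (both queries exit with status 2, which is again tolerated), then unpause and query both fields once more. A compact sketch of that loop, reusing the tolerant-exit-code idea from the earlier example; only the binary path, profile name, and status fields are taken from this run.

package main

import (
    "fmt"
    "os/exec"
)

// run executes the minikube binary and returns combined output plus the
// exit code; a non-zero exit is reported, not treated as a hard error.
func run(args ...string) (string, int) {
    out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
    if ee, ok := err.(*exec.ExitError); ok {
        return string(out), ee.ExitCode()
    }
    if err != nil {
        panic(err)
    }
    return string(out), 0
}

func main() {
    const p = "newest-cni-340536"
    run("pause", "-p", p, "--alsologtostderr", "-v=1")
    // While paused, both status queries come back with exit status 2.
    for field, want := range map[string]string{"APIServer": "Paused", "Kubelet": "Stopped"} {
        out, code := run("status", "--format={{."+field+"}}", "-p", p, "-n", p)
        fmt.Printf("%s=%q (exit status %d, want %q)\n", field, out, code, want)
    }
    run("unpause", "-p", p, "--alsologtostderr", "-v=1")
    // After unpause the same status queries should exit 0 again.
    for _, field := range []string{"APIServer", "Kubelet"} {
        out, code := run("status", "--format={{."+field+"}}", "-p", p, "-n", p)
        fmt.Printf("%s=%q (exit status %d)\n", field, out, code)
    }
}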

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (48.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-994533 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-994533 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (48.184761157s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (48.18s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-071531 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6a3331ea-5572-4428-94a8-4f606b1f74a5] Pending
helpers_test.go:344: "busybox" [6a3331ea-5572-4428-94a8-4f606b1f74a5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1009 19:38:21.077971   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/auto-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:38:21.084415   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/auto-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:38:21.095833   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/auto-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:38:21.117306   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/auto-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:38:21.158764   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/auto-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:38:21.240305   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/auto-433038/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [6a3331ea-5572-4428-94a8-4f606b1f74a5] Running
E1009 19:38:21.402472   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/auto-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:38:21.723983   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/auto-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:38:22.365388   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/auto-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:38:23.647308   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/auto-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:38:26.208973   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/auto-433038/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003395847s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-071531 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.40s)
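
The DeployApp step above is a create, wait, exec pipeline: apply the busybox manifest, wait up to 8m0s for the pod labeled integration-test=busybox to become healthy, then exec into it to read the open-file limit. Below is a sketch of the same flow in which kubectl's built-in wait stands in for the test's own polling helper; treat it as an assumed equivalent, not the repository's code.

package main

import (
    "fmt"
    "os/exec"
)

// kubectl runs a kubectl subcommand against a named context and returns
// combined output so failures can be shown verbatim.
func kubectl(ctx string, args ...string) (string, error) {
    full := append([]string{"--context", ctx}, args...)
    out, err := exec.Command("kubectl", full...).CombinedOutput()
    return string(out), err
}

func main() {
    const ctx = "old-k8s-version-071531"
    if out, err := kubectl(ctx, "create", "-f", "testdata/busybox.yaml"); err != nil {
        panic(out)
    }
    // The test allows 8m0s for the pod to become healthy; kubectl wait
    // blocks until the Ready condition is met or the timeout expires.
    if out, err := kubectl(ctx, "wait", "--for=condition=Ready",
        "pod", "-l", "integration-test=busybox", "--timeout=8m0s"); err != nil {
        panic(out)
    }
    out, err := kubectl(ctx, "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
    if err != nil {
        panic(out)
    }
    fmt.Print("open file limit: ", out)
}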

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-071531 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-071531 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.83s)
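
The two override flags above deliberately point the metrics-server addon at an unreachable registry (fake.domain) with an echoserver image, so the deployment is created without ever pulling a real metrics-server. Here is a sketch of how one could confirm the override landed, assuming only that the rewritten image reference shows up in the deployment spec; the verification step itself is illustrative, not part of the test shown.

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    // Read back the image the deployment actually got after the addon
    // was enabled with the fake registry override.
    out, err := exec.Command("kubectl", "--context", "old-k8s-version-071531",
        "get", "deploy/metrics-server", "-n", "kube-system",
        "-o", "jsonpath={.spec.template.spec.containers[0].image}").Output()
    if err != nil {
        panic(err)
    }
    img := strings.TrimSpace(string(out))
    if !strings.Contains(img, "fake.domain") {
        panic("expected the fake.domain registry override, got: " + img)
    }
    fmt.Println("metrics-server image override in effect:", img)
}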

TestStartStop/group/old-k8s-version/serial/Stop (12.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-071531 --alsologtostderr -v=3
E1009 19:38:31.330862   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/auto-433038/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-071531 --alsologtostderr -v=3: (12.035049741s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.04s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-071531 -n old-k8s-version-071531
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-071531 -n old-k8s-version-071531: exit status 7 (75.992898ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-071531 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/old-k8s-version/serial/SecondStart (143.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-071531 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E1009 19:38:41.572451   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/auto-433038/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-071531 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m23.438446041s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-071531 -n old-k8s-version-071531
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (143.76s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.32s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-994533 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cd334840-6644-4cf2-9f3f-8c48204e102a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cd334840-6644-4cf2-9f3f-8c48204e102a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.0042079s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-994533 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.32s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-994533 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-994533 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.90s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-994533 --alsologtostderr -v=3
E1009 19:39:02.054310   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/auto-433038/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-994533 --alsologtostderr -v=3: (12.214282705s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.21s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-994533 -n default-k8s-diff-port-994533
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-994533 -n default-k8s-diff-port-994533: exit status 7 (76.3623ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-994533 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (274.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-994533 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E1009 19:39:08.443144   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/addons-814968/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:17.003961   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/kindnet-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:17.010389   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/kindnet-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:17.021775   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/kindnet-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:17.043352   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/kindnet-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:17.084800   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/kindnet-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:17.166296   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/kindnet-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:17.327882   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/kindnet-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:17.649946   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/kindnet-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:18.291997   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/kindnet-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:19.573990   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/kindnet-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:22.135686   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/kindnet-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:26.924495   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:26.930924   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:26.942365   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:26.963800   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:27.005342   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:27.086792   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:27.248429   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:27.257862   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/kindnet-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:27.570666   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:28.212257   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:29.494249   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:32.056279   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:37.177857   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:37.499301   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/kindnet-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:43.016427   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/auto-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:47.419227   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:39:57.980900   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/kindnet-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:02.453962   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/enable-default-cni-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:02.460400   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/enable-default-cni-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:02.471926   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/enable-default-cni-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:02.493349   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/enable-default-cni-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:02.534821   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/enable-default-cni-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:02.616323   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/enable-default-cni-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:02.777865   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/enable-default-cni-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:03.099591   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/enable-default-cni-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:03.581099   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/functional-275165/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:03.741708   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/enable-default-cni-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:05.023589   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/enable-default-cni-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:07.585266   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/enable-default-cni-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:07.901222   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:12.706720   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/enable-default-cni-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:22.948827   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/enable-default-cni-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:38.942298   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/kindnet-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:43.430362   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/enable-default-cni-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:48.862615   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:52.464002   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/custom-flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:52.470404   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/custom-flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:52.481772   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/custom-flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:52.503250   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/custom-flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:52.544676   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/custom-flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:52.626107   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/custom-flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:52.788394   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/custom-flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:53.109887   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/custom-flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:53.751766   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/custom-flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:55.033786   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/custom-flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:40:57.595680   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/custom-flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:01.846026   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/bridge-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:01.852472   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/bridge-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:01.863870   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/bridge-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:01.885288   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/bridge-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:01.926726   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/bridge-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:02.008203   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/bridge-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:02.169772   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/bridge-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:02.491864   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/bridge-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:02.717475   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/custom-flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:03.133848   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/bridge-433038/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-994533 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m34.211586235s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-994533 -n default-k8s-diff-port-994533
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (274.51s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-l272k" [3be17b74-c4ea-4a20-8e38-77868164b5b5] Running
E1009 19:41:04.416029   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/bridge-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:04.938446   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/auto-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:06.978011   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/bridge-433038/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004307779s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-l272k" [3be17b74-c4ea-4a20-8e38-77868164b5b5] Running
E1009 19:41:12.099434   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/bridge-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:41:12.958795   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/custom-flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004391527s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-071531 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-071531 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (2.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-071531 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-071531 -n old-k8s-version-071531
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-071531 -n old-k8s-version-071531: exit status 2 (295.544943ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-071531 -n old-k8s-version-071531
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-071531 -n old-k8s-version-071531: exit status 2 (293.373987ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-071531 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-071531 -n old-k8s-version-071531
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-071531 -n old-k8s-version-071531
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.57s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-q7x8d" [83d0b87b-8167-4408-aed9-018c73a6870f] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004432386s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2zcc6" [26722331-5e3a-4179-8090-abbc3392c19f] Running
E1009 19:42:11.628299   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/calico-433038/client.crt: no such file or directory" logger="UnhandledError"
E1009 19:42:14.401777   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/custom-flannel-433038/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004289659s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-q7x8d" [83d0b87b-8167-4408-aed9-018c73a6870f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003789449s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-637667 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2zcc6" [26722331-5e3a-4179-8090-abbc3392c19f] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004522359s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-190407 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-637667 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-190407 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/Pause (2.83s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-637667 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-637667 -n no-preload-637667
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-637667 -n no-preload-637667: exit status 2 (310.690095ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-637667 -n no-preload-637667
E1009 19:42:23.784994   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/bridge-433038/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-637667 -n no-preload-637667: exit status 2 (302.127383ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-637667 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-637667 -n no-preload-637667
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-637667 -n no-preload-637667
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.83s)

TestStartStop/group/embed-certs/serial/Pause (2.83s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-190407 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-190407 -n embed-certs-190407
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-190407 -n embed-certs-190407: exit status 2 (309.707117ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-190407 -n embed-certs-190407
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-190407 -n embed-certs-190407: exit status 2 (313.678695ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-190407 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-190407 -n embed-certs-190407
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-190407 -n embed-certs-190407
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.83s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-28jsv" [f274d72c-7ca3-4f3d-9092-f2de521a0989] Running
E1009 19:43:45.707447   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/bridge-433038/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003647739s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-28jsv" [f274d72c-7ca3-4f3d-9092-f2de521a0989] Running
E1009 19:43:48.780690   15983 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19780-9209/.minikube/profiles/auto-433038/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004459259s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-994533 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-994533 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-994533 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-994533 -n default-k8s-diff-port-994533
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-994533 -n default-k8s-diff-port-994533: exit status 2 (282.967338ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-994533 -n default-k8s-diff-port-994533
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-994533 -n default-k8s-diff-port-994533: exit status 2 (284.080766ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-994533 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-994533 -n default-k8s-diff-port-994533
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-994533 -n default-k8s-diff-port-994533
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.56s)
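
(Note: the pause check boils down to the four commands below. minikube status exits non-zero while components are paused or stopped, which is why the harness logs "exit status 2 (may be ok)" rather than failing:)

  out/minikube-linux-amd64 pause -p default-k8s-diff-port-994533 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-994533   # prints "Paused", exit status 2
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-994533     # prints "Stopped", exit status 2
  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-994533 --alsologtostderr -v=1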

Test skip (25/328)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestAddons/serial/Volcano (0.27s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:785: skipping: crio not supported
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 -p addons-814968 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.27s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.83s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-433038 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-433038

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-433038

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-433038

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-433038

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-433038

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-433038

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-433038

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-433038

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-433038

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-433038

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> host: /etc/hosts:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> host: /etc/resolv.conf:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-433038

>>> host: crictl pods:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> host: crictl containers:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> k8s: describe netcat deployment:
error: context "kubenet-433038" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-433038" does not exist

>>> k8s: netcat logs:
error: context "kubenet-433038" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-433038" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-433038" does not exist

>>> k8s: coredns logs:
error: context "kubenet-433038" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-433038" does not exist

>>> k8s: api server logs:
error: context "kubenet-433038" does not exist

>>> host: /etc/cni:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> host: ip a s:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> host: ip r s:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> host: iptables-save:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> host: iptables table nat:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-433038" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-433038" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-433038" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> host: kubelet daemon config:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> k8s: kubelet logs:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-433038

>>> host: docker daemon status:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> host: docker daemon config:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> host: docker system info:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> host: cri-docker daemon status:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> host: cri-docker daemon config:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> host: cri-dockerd version:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> host: containerd daemon status:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> host: containerd daemon config:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> host: containerd config dump:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> host: crio daemon status:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> host: crio daemon config:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> host: /etc/crio:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

>>> host: crio config:
* Profile "kubenet-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-433038"

----------------------- debugLogs end: kubenet-433038 [took: 3.639813898s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-433038" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-433038
--- SKIP: TestNetworkPlugins/group/kubenet (3.83s)
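
(Note: the "context was not found" / "Profile ... not found" lines above are expected, not failures: the test is skipped before the kubenet-433038 cluster is ever created, so the debug-log collector queries a profile and kubectl context that never existed. This is easy to confirm with the commands the log itself suggests:)

  out/minikube-linux-amd64 profile list   # kubenet-433038 is absent
  kubectl config get-contexts             # no kubenet-433038 context either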

TestNetworkPlugins/group/cilium (3.96s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-433038 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-433038

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-433038

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-433038

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-433038

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-433038

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-433038

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-433038

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-433038

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-433038

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-433038

>>> host: /etc/nsswitch.conf:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> host: /etc/hosts:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> host: /etc/resolv.conf:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-433038

>>> host: crictl pods:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> host: crictl containers:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> k8s: describe netcat deployment:
error: context "cilium-433038" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-433038" does not exist

>>> k8s: netcat logs:
error: context "cilium-433038" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-433038" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-433038" does not exist

>>> k8s: coredns logs:
error: context "cilium-433038" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-433038" does not exist

>>> k8s: api server logs:
error: context "cilium-433038" does not exist

>>> host: /etc/cni:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> host: ip a s:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> host: ip r s:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> host: iptables-save:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> host: iptables table nat:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-433038

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-433038

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-433038" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-433038" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-433038

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-433038

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-433038" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-433038" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-433038" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-433038" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-433038" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> host: kubelet daemon config:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> k8s: kubelet logs:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-433038

>>> host: docker daemon status:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> host: docker daemon config:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> host: docker system info:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> host: cri-docker daemon status:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> host: cri-docker daemon config:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> host: cri-dockerd version:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> host: containerd daemon status:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> host: containerd daemon config:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> host: containerd config dump:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> host: crio daemon status:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> host: crio daemon config:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> host: /etc/crio:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

>>> host: crio config:
* Profile "cilium-433038" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-433038"

----------------------- debugLogs end: cilium-433038 [took: 3.792060942s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-433038" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-433038
--- SKIP: TestNetworkPlugins/group/cilium (3.96s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-659476" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-659476
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)