Test Report: Docker_Linux_crio 20045

70ee1ceb4b2f7849aa4717a6092bbfa282d9029b:2024-12-05:37344

Tests failed (15/329)

TestAddons/parallel/Ingress (492.17s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-630093 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-630093 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-630093 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [033304b8-dc25-498d-9212-9e1e40bc9c12] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:329: TestAddons/parallel/Ingress: WARNING: pod list for "default" "run=nginx" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:250: ***** TestAddons/parallel/Ingress: pod "run=nginx" failed to start within 8m0s: context deadline exceeded ****
addons_test.go:250: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-630093 -n addons-630093
addons_test.go:250: TestAddons/parallel/Ingress: showing logs for failed pods as of 2024-12-04 23:22:26.474300605 +0000 UTC m=+706.340285713
addons_test.go:250: (dbg) Run:  kubectl --context addons-630093 describe po nginx -n default
addons_test.go:250: (dbg) kubectl --context addons-630093 describe po nginx -n default:
Name:             nginx
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-630093/192.168.49.2
Start Time:       Wed, 04 Dec 2024 23:14:26 +0000
Labels:           run=nginx
Annotations:      <none>
Status:           Pending
IP:               10.244.0.27
IPs:
  IP:  10.244.0.27
Containers:
  nginx:
    Container ID:   
    Image:          docker.io/nginx:alpine
    Image ID:       
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-49bg2 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-49bg2:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  8m                   default-scheduler  Successfully assigned default/nginx to addons-630093
Normal   Pulling    3m7s (x4 over 8m)    kubelet            Pulling image "docker.io/nginx:alpine"
Warning  Failed     115s (x4 over 7m1s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     115s (x4 over 7m1s)  kubelet            Error: ErrImagePull
Normal   BackOff    86s (x7 over 7m1s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     86s (x7 over 7m1s)   kubelet            Error: ImagePullBackOff
addons_test.go:250: (dbg) Run:  kubectl --context addons-630093 logs nginx -n default
addons_test.go:250: (dbg) Non-zero exit: kubectl --context addons-630093 logs nginx -n default: exit status 1 (65.967083ms)

** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx" is waiting to start: trying and failing to pull image

** /stderr **
addons_test.go:250: kubectl --context addons-630093 logs nginx -n default: exit status 1
addons_test.go:251: failed waiting for nginx pod: run=nginx within 8m0s: context deadline exceeded
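
Note: the describe output above pins the failure on Docker Hub's anonymous pull rate limit — every kubelet pull of docker.io/nginx:alpine fails with toomanyrequests until the quota window resets. A quick way to check the remaining anonymous quota from the affected host is to read the registry's RateLimit headers. A minimal sketch, assuming curl and jq are installed (ratelimitpreview/test is the probe repository Docker documents for this check):

# Fetch an anonymous pull token for Docker's documented rate-limit probe repo.
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
# HEAD the manifest and print the rate-limit headers; per Docker's docs a HEAD
# request should not itself consume a pull.
curl -s --head -H "Authorization: Bearer $TOKEN" \
  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit
# Typical output when throttled, matching the events above:
#   ratelimit-limit: 100;w=21600
#   ratelimit-remaining: 0;w=21600
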
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-630093
helpers_test.go:235: (dbg) docker inspect addons-630093:

-- stdout --
	[
	    {
	        "Id": "172acc3450ade00044526824741e005120317f6d35ec317f851d2b6dc6d2a3b8",
	        "Created": "2024-12-04T23:11:16.797897353Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 389943,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-04T23:11:16.916347418Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a0bf2062289d31d12b734a031220306d830691a529a6eae8b4c8f4049e20571",
	        "ResolvConfPath": "/var/lib/docker/containers/172acc3450ade00044526824741e005120317f6d35ec317f851d2b6dc6d2a3b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/172acc3450ade00044526824741e005120317f6d35ec317f851d2b6dc6d2a3b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/172acc3450ade00044526824741e005120317f6d35ec317f851d2b6dc6d2a3b8/hosts",
	        "LogPath": "/var/lib/docker/containers/172acc3450ade00044526824741e005120317f6d35ec317f851d2b6dc6d2a3b8/172acc3450ade00044526824741e005120317f6d35ec317f851d2b6dc6d2a3b8-json.log",
	        "Name": "/addons-630093",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-630093:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-630093",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/469ba36a797e51b3c3ffcf32044a5cc7b1eaaf002213862a02e3a76a9b1fcfe2-init/diff:/var/lib/docker/overlay2/e1057f3484b1ab78c06169089ecae0d5a5ffb4d6954d3cd93f0938b7adf18020/diff",
	                "MergedDir": "/var/lib/docker/overlay2/469ba36a797e51b3c3ffcf32044a5cc7b1eaaf002213862a02e3a76a9b1fcfe2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/469ba36a797e51b3c3ffcf32044a5cc7b1eaaf002213862a02e3a76a9b1fcfe2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/469ba36a797e51b3c3ffcf32044a5cc7b1eaaf002213862a02e3a76a9b1fcfe2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-630093",
	                "Source": "/var/lib/docker/volumes/addons-630093/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-630093",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-630093",
	                "name.minikube.sigs.k8s.io": "addons-630093",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "38d3a3f6bb8d75ec22d0acfa9ec923dac8873b55e0bf68a977ec8a7eab9fc43d",
	            "SandboxKey": "/var/run/docker/netns/38d3a3f6bb8d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-630093": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a921fd89d48682e01ff03a455275f7258f4c5b5f271375ec1d96882eeae0da5a",
	                    "EndpointID": "1045d162f6b6ab28f4f633530bdbe7b45cc7c49fe1d735b103b4e8f31f8aba3e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-630093",
	                        "172acc3450ad"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
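
Note: the inspect output shows the node container running normally, with each guest port published on an ephemeral localhost port (e.g. 8443/tcp, the Kubernetes API server, at 127.0.0.1:33143). To pull one of these mappings out in a script, a sketch using the profile name from this run and the same Go template minikube itself applies to 22/tcp later in this log:

# Shortest form: ask Docker for the host binding of the API server port.
docker port addons-630093 8443/tcp
# Template form, mirroring minikube's own lookup for the SSH port (22/tcp):
docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' addons-630093
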
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-630093 -n addons-630093
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-630093 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-630093 logs -n 25: (1.185483272s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-701357              | download-only-701357   | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| delete  | -p download-only-287298              | download-only-287298   | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| delete  | -p download-only-701357              | download-only-701357   | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| start   | --download-only -p                   | download-docker-758817 | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC |                     |
	|         | download-docker-758817               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-758817            | download-docker-758817 | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| start   | --download-only -p                   | binary-mirror-223027   | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC |                     |
	|         | binary-mirror-223027                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45271               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-223027              | binary-mirror-223027   | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| addons  | disable dashboard -p                 | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC |                     |
	|         | addons-630093                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC |                     |
	|         | addons-630093                        |                        |         |         |                     |                     |
	| start   | -p addons-630093 --wait=true         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:13 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	| addons  | addons-630093 addons disable         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:13 UTC | 04 Dec 24 23:13 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-630093 addons disable         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:13 UTC | 04 Dec 24 23:14 UTC |
	|         | gcp-auth --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | -p addons-630093                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-630093 addons                 | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | disable nvidia-device-plugin         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-630093 addons disable         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | amd-gpu-device-plugin                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-630093 addons disable         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| ip      | addons-630093 ip                     | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	| addons  | addons-630093 addons disable         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-630093 addons                 | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | disable inspektor-gadget             |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-630093 addons disable         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | addons-630093 addons disable         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:19 UTC | 04 Dec 24 23:20 UTC |
	|         | storage-provisioner-rancher          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-630093 addons                 | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:20 UTC | 04 Dec 24 23:20 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-630093 addons                 | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:20 UTC | 04 Dec 24 23:20 UTC |
	|         | disable cloud-spanner                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-630093 addons                 | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:20 UTC | 04 Dec 24 23:20 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-630093 addons                 | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:20 UTC | 04 Dec 24 23:20 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
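
Note: the `start` row above wraps across many table rows; reassembled as the single command it represents (flags exactly as listed in the Args column, with the binary path assumed from the harness invocations elsewhere in this report):

out/minikube-linux-amd64 start -p addons-630093 --wait=true \
  --memory=4000 --alsologtostderr \
  --addons=registry --addons=metrics-server --addons=volumesnapshots \
  --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
  --addons=inspektor-gadget --addons=nvidia-device-plugin \
  --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin \
  --driver=docker --container-runtime=crio \
  --addons=ingress --addons=ingress-dns \
  --addons=storage-provisioner-rancher
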
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 23:10:54
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 23:10:54.556147  389201 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:10:54.556275  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:10:54.556285  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:10:54.556289  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:10:54.556510  389201 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-381016/.minikube/bin
	I1204 23:10:54.557204  389201 out.go:352] Setting JSON to false
	I1204 23:10:54.558202  389201 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6804,"bootTime":1733347051,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 23:10:54.558281  389201 start.go:139] virtualization: kvm guest
	I1204 23:10:54.560449  389201 out.go:177] * [addons-630093] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 23:10:54.561800  389201 notify.go:220] Checking for updates...
	I1204 23:10:54.561821  389201 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 23:10:54.563229  389201 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 23:10:54.564678  389201 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-381016/kubeconfig
	I1204 23:10:54.566233  389201 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-381016/.minikube
	I1204 23:10:54.567553  389201 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 23:10:54.568781  389201 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 23:10:54.570554  389201 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 23:10:54.592245  389201 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1204 23:10:54.592340  389201 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:10:54.635748  389201 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-04 23:10:54.62674737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1204 23:10:54.635854  389201 docker.go:318] overlay module found
	I1204 23:10:54.637780  389201 out.go:177] * Using the docker driver based on user configuration
	I1204 23:10:54.639298  389201 start.go:297] selected driver: docker
	I1204 23:10:54.639319  389201 start.go:901] validating driver "docker" against <nil>
	I1204 23:10:54.639333  389201 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 23:10:54.640090  389201 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:10:54.684497  389201 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-04 23:10:54.676209306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1204 23:10:54.684673  389201 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 23:10:54.684915  389201 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 23:10:54.686872  389201 out.go:177] * Using Docker driver with root privileges
	I1204 23:10:54.688173  389201 cni.go:84] Creating CNI manager for ""
	I1204 23:10:54.688255  389201 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1204 23:10:54.688267  389201 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1204 23:10:54.688343  389201 start.go:340] cluster config:
	{Name:addons-630093 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-630093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:10:54.689848  389201 out.go:177] * Starting "addons-630093" primary control-plane node in "addons-630093" cluster
	I1204 23:10:54.691334  389201 cache.go:121] Beginning downloading kic base image for docker with crio
	I1204 23:10:54.692886  389201 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1204 23:10:54.694391  389201 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:10:54.694445  389201 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20045-381016/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 23:10:54.694446  389201 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1204 23:10:54.694486  389201 cache.go:56] Caching tarball of preloaded images
	I1204 23:10:54.694592  389201 preload.go:172] Found /home/jenkins/minikube-integration/20045-381016/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 23:10:54.694609  389201 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 23:10:54.695076  389201 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/config.json ...
	I1204 23:10:54.695108  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/config.json: {Name:mk972e12a39ea9a33ae63a1f9239f64d658df51e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:10:54.710108  389201 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1204 23:10:54.710258  389201 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1204 23:10:54.710280  389201 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory, skipping pull
	I1204 23:10:54.710287  389201 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in cache, skipping pull
	I1204 23:10:54.710299  389201 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	I1204 23:10:54.710311  389201 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from local cache
	I1204 23:11:08.081763  389201 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from cached tarball
	I1204 23:11:08.081807  389201 cache.go:194] Successfully downloaded all kic artifacts
	I1204 23:11:08.081860  389201 start.go:360] acquireMachinesLock for addons-630093: {Name:mk65aca0e5e36a044494f94ee0e0497ac2b0ebab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 23:11:08.081970  389201 start.go:364] duration metric: took 86.786µs to acquireMachinesLock for "addons-630093"
	I1204 23:11:08.081996  389201 start.go:93] Provisioning new machine with config: &{Name:addons-630093 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-630093 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQe
muFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:11:08.082085  389201 start.go:125] createHost starting for "" (driver="docker")
	I1204 23:11:08.248667  389201 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1204 23:11:08.249041  389201 start.go:159] libmachine.API.Create for "addons-630093" (driver="docker")
	I1204 23:11:08.249091  389201 client.go:168] LocalClient.Create starting
	I1204 23:11:08.249258  389201 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca.pem
	I1204 23:11:08.313688  389201 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/cert.pem
	I1204 23:11:08.644970  389201 cli_runner.go:164] Run: docker network inspect addons-630093 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1204 23:11:08.660700  389201 cli_runner.go:211] docker network inspect addons-630093 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1204 23:11:08.660788  389201 network_create.go:284] running [docker network inspect addons-630093] to gather additional debugging logs...
	I1204 23:11:08.660826  389201 cli_runner.go:164] Run: docker network inspect addons-630093
	W1204 23:11:08.677347  389201 cli_runner.go:211] docker network inspect addons-630093 returned with exit code 1
	I1204 23:11:08.677402  389201 network_create.go:287] error running [docker network inspect addons-630093]: docker network inspect addons-630093: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-630093 not found
	I1204 23:11:08.677421  389201 network_create.go:289] output of [docker network inspect addons-630093]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-630093 not found
	
	** /stderr **
	I1204 23:11:08.677519  389201 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1204 23:11:08.695034  389201 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016ec7e0}
	I1204 23:11:08.695093  389201 network_create.go:124] attempt to create docker network addons-630093 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1204 23:11:08.695152  389201 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-630093 addons-630093
	I1204 23:11:08.969618  389201 network_create.go:108] docker network addons-630093 192.168.49.0/24 created
	I1204 23:11:08.969673  389201 kic.go:121] calculated static IP "192.168.49.2" for the "addons-630093" container
	I1204 23:11:08.969756  389201 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1204 23:11:08.986135  389201 cli_runner.go:164] Run: docker volume create addons-630093 --label name.minikube.sigs.k8s.io=addons-630093 --label created_by.minikube.sigs.k8s.io=true
	I1204 23:11:09.028135  389201 oci.go:103] Successfully created a docker volume addons-630093
	I1204 23:11:09.028233  389201 cli_runner.go:164] Run: docker run --rm --name addons-630093-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-630093 --entrypoint /usr/bin/test -v addons-630093:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib
	I1204 23:11:12.239841  389201 cli_runner.go:217] Completed: docker run --rm --name addons-630093-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-630093 --entrypoint /usr/bin/test -v addons-630093:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib: (3.211561235s)
	I1204 23:11:12.239873  389201 oci.go:107] Successfully prepared a docker volume addons-630093
	I1204 23:11:12.239893  389201 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:11:12.239931  389201 kic.go:194] Starting extracting preloaded images to volume ...
	I1204 23:11:12.240003  389201 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20045-381016/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-630093:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir
	I1204 23:11:16.734062  389201 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20045-381016/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-630093:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir: (4.493971774s)
	I1204 23:11:16.734103  389201 kic.go:203] duration metric: took 4.49416848s to extract preloaded images to volume ...
	W1204 23:11:16.734242  389201 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1204 23:11:16.734340  389201 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1204 23:11:16.781802  389201 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-630093 --name addons-630093 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-630093 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-630093 --network addons-630093 --ip 192.168.49.2 --volume addons-630093:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615
	I1204 23:11:17.088338  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Running}}
	I1204 23:11:17.106885  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:17.125610  389201 cli_runner.go:164] Run: docker exec addons-630093 stat /var/lib/dpkg/alternatives/iptables
	I1204 23:11:17.168914  389201 oci.go:144] the created container "addons-630093" has a running status.
	I1204 23:11:17.168961  389201 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa...
	I1204 23:11:17.214837  389201 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1204 23:11:17.235866  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:17.253714  389201 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1204 23:11:17.253744  389201 kic_runner.go:114] Args: [docker exec --privileged addons-630093 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1204 23:11:17.295280  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:17.314090  389201 machine.go:93] provisionDockerMachine start ...
	I1204 23:11:17.314213  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:17.333326  389201 main.go:141] libmachine: Using SSH client type: native
	I1204 23:11:17.333585  389201 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1204 23:11:17.333604  389201 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 23:11:17.334344  389201 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53382->127.0.0.1:33140: read: connection reset by peer
	I1204 23:11:20.462359  389201 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-630093
	
	I1204 23:11:20.462394  389201 ubuntu.go:169] provisioning hostname "addons-630093"
	I1204 23:11:20.462459  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:20.480144  389201 main.go:141] libmachine: Using SSH client type: native
	I1204 23:11:20.480382  389201 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1204 23:11:20.480401  389201 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-630093 && echo "addons-630093" | sudo tee /etc/hostname
	I1204 23:11:20.617685  389201 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-630093
	
	I1204 23:11:20.617755  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:20.634927  389201 main.go:141] libmachine: Using SSH client type: native
	I1204 23:11:20.635110  389201 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1204 23:11:20.635127  389201 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-630093' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-630093/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-630093' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 23:11:20.762943  389201 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:11:20.762974  389201 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20045-381016/.minikube CaCertPath:/home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20045-381016/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20045-381016/.minikube}
	I1204 23:11:20.763024  389201 ubuntu.go:177] setting up certificates
	I1204 23:11:20.763037  389201 provision.go:84] configureAuth start
	I1204 23:11:20.763097  389201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-630093
	I1204 23:11:20.780798  389201 provision.go:143] copyHostCerts
	I1204 23:11:20.780875  389201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20045-381016/.minikube/cert.pem (1123 bytes)
	I1204 23:11:20.780993  389201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20045-381016/.minikube/key.pem (1679 bytes)
	I1204 23:11:20.781063  389201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20045-381016/.minikube/ca.pem (1082 bytes)
	I1204 23:11:20.781117  389201 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20045-381016/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca-key.pem org=jenkins.addons-630093 san=[127.0.0.1 192.168.49.2 addons-630093 localhost minikube]
	I1204 23:11:20.868299  389201 provision.go:177] copyRemoteCerts
	I1204 23:11:20.868362  389201 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 23:11:20.868401  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:20.885888  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:20.979351  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 23:11:21.002115  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 23:11:21.025135  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 23:11:21.048097  389201 provision.go:87] duration metric: took 285.042631ms to configureAuth
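Note: configureAuth above takes ~285ms: it copies the host CA material, issues a server certificate whose SANs match the node (san=[127.0.0.1 192.168.49.2 addons-630093 localhost minikube]), and pushes the results to /etc/docker over SSH. A minimal crypto/x509 sketch of the issue-under-an-existing-CA step (file names shortened; assumes an RSA PKCS#1 CA key as minikube uses; not the actual provision package):

```go
// sketch_servercert.go: issue a server certificate signed by an existing CA,
// with the IP and DNS SANs seen in the provision log. Illustrative only.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// mustPEM decodes the first PEM block in a file or aborts.
func mustPEM(path string) []byte {
	b, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(b)
	if block == nil {
		log.Fatalf("no PEM data in %s", path)
	}
	return block.Bytes
}

func main() {
	caCert, err := x509.ParseCertificate(mustPEM("ca.pem"))
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem"))
	if err != nil {
		log.Fatal(err)
	}
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-630093"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN set from the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:    []string{"addons-630093", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &priv.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
	os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv)}), 0o600)
}
```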
	I1204 23:11:21.048133  389201 ubuntu.go:193] setting minikube options for container-runtime
	I1204 23:11:21.048329  389201 config.go:182] Loaded profile config "addons-630093": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:11:21.048491  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:21.065589  389201 main.go:141] libmachine: Using SSH client type: native
	I1204 23:11:21.065803  389201 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1204 23:11:21.065829  389201 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 23:11:21.286767  389201 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 23:11:21.286801  389201 machine.go:96] duration metric: took 3.972682372s to provisionDockerMachine
	I1204 23:11:21.286818  389201 client.go:171] duration metric: took 13.037716692s to LocalClient.Create
	I1204 23:11:21.286846  389201 start.go:167] duration metric: took 13.037808895s to libmachine.API.Create "addons-630093"
	I1204 23:11:21.286858  389201 start.go:293] postStartSetup for "addons-630093" (driver="docker")
	I1204 23:11:21.286873  389201 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 23:11:21.286987  389201 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 23:11:21.287090  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:21.304282  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:21.395931  389201 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 23:11:21.399160  389201 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1204 23:11:21.399199  389201 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1204 23:11:21.399213  389201 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1204 23:11:21.399225  389201 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1204 23:11:21.399238  389201 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-381016/.minikube/addons for local assets ...
	I1204 23:11:21.399311  389201 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-381016/.minikube/files for local assets ...
	I1204 23:11:21.399355  389201 start.go:296] duration metric: took 112.489476ms for postStartSetup
	I1204 23:11:21.399706  389201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-630093
	I1204 23:11:21.416048  389201 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/config.json ...
	I1204 23:11:21.416313  389201 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1204 23:11:21.416373  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:21.433021  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:21.523629  389201 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1204 23:11:21.527955  389201 start.go:128] duration metric: took 13.445851769s to createHost
	I1204 23:11:21.527994  389201 start.go:83] releasing machines lock for "addons-630093", held for 13.446010021s
	I1204 23:11:21.528078  389201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-630093
	I1204 23:11:21.544604  389201 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 23:11:21.544635  389201 ssh_runner.go:195] Run: cat /version.json
	I1204 23:11:21.544698  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:21.544711  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:21.562063  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:21.563107  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:21.726911  389201 ssh_runner.go:195] Run: systemctl --version
	I1204 23:11:21.731218  389201 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 23:11:21.869255  389201 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1204 23:11:21.873644  389201 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 23:11:21.892231  389201 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1204 23:11:21.892324  389201 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 23:11:21.918534  389201 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
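Note: before kindnet is installed, minikube moves any pre-existing loopback/bridge/podman CNI configs out of the way by appending ".mk_disabled", which is what the two `find ... -exec mv` commands above do. A rough Go equivalent of that rename-to-disable step (glob patterns simplified from the log; not minikube's cni package):

```go
// sketch_cni_disable.go: rename conflicting CNI configs in /etc/cni/net.d
// by appending ".mk_disabled", mirroring the find/mv commands above.
package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	for _, pattern := range []string{"*loopback.conf*", "*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join("/etc/cni/net.d", pattern))
		if err != nil {
			log.Fatal(err)
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				log.Printf("disable %s: %v", m, err)
			} else {
				log.Printf("disabled %s", m)
			}
		}
	}
}
```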
	I1204 23:11:21.918567  389201 start.go:495] detecting cgroup driver to use...
	I1204 23:11:21.918609  389201 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1204 23:11:21.918738  389201 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 23:11:21.932783  389201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 23:11:21.942996  389201 docker.go:217] disabling cri-docker service (if available) ...
	I1204 23:11:21.943047  389201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 23:11:21.955543  389201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 23:11:21.968274  389201 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 23:11:22.038339  389201 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 23:11:22.105989  389201 docker.go:233] disabling docker service ...
	I1204 23:11:22.106057  389201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 23:11:22.125303  389201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 23:11:22.136595  389201 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 23:11:22.222266  389201 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 23:11:22.302782  389201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 23:11:22.313850  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 23:11:22.329072  389201 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 23:11:22.329153  389201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:11:22.338774  389201 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 23:11:22.338845  389201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:11:22.348617  389201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:11:22.358293  389201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:11:22.368200  389201 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 23:11:22.377304  389201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:11:22.386913  389201 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:11:22.402803  389201 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:11:22.412320  389201 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 23:11:22.420685  389201 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 23:11:22.428658  389201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:11:22.500255  389201 ssh_runner.go:195] Run: sudo systemctl restart crio
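Note: the sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10, force cgroup_manager = "cgroupfs", set conmon_cgroup = "pod", and open unprivileged ports via default_sysctls, then daemon-reload and restart CRI-O. A compact regexp-based sketch of the same rewrites (appending default_sysctls at end-of-file is a simplification; minikube inserts it next to conmon_cgroup):

```go
// sketch_crio_conf.go: apply the key rewrites that the sed commands above
// perform on /etc/crio/crio.conf.d/02-crio.conf. Illustrative sketch only.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	b, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	s := string(b)
	// Pin the pause image.
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10"`)
	// Drop any existing conmon_cgroup line, then set cgroupfs + pod together.
	s = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(s, "")
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	// Ensure unprivileged ports start at 0 (needed for hostPort 80 etc.).
	if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(s) {
		s += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
	}
	if err := os.WriteFile(path, []byte(s), 0o644); err != nil {
		log.Fatal(err)
	}
	// followed by: systemctl daemon-reload && systemctl restart crio
}
```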
	I1204 23:11:22.610956  389201 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 23:11:22.611044  389201 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 23:11:22.614513  389201 start.go:563] Will wait 60s for crictl version
	I1204 23:11:22.614575  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:11:22.617917  389201 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 23:11:22.653283  389201 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1204 23:11:22.653370  389201 ssh_runner.go:195] Run: crio --version
	I1204 23:11:22.690618  389201 ssh_runner.go:195] Run: crio --version
	I1204 23:11:22.727703  389201 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1204 23:11:22.729320  389201 cli_runner.go:164] Run: docker network inspect addons-630093 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1204 23:11:22.746518  389201 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1204 23:11:22.750432  389201 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:11:22.761195  389201 kubeadm.go:883] updating cluster {Name:addons-630093 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-630093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 23:11:22.761320  389201 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:11:22.761379  389201 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 23:11:22.829323  389201 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 23:11:22.829348  389201 crio.go:433] Images already preloaded, skipping extraction
	I1204 23:11:22.829393  389201 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 23:11:22.862169  389201 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 23:11:22.862194  389201 cache_images.go:84] Images are preloaded, skipping loading
	I1204 23:11:22.862203  389201 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1204 23:11:22.862323  389201 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-630093 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-630093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 23:11:22.862387  389201 ssh_runner.go:195] Run: crio config
	I1204 23:11:22.906710  389201 cni.go:84] Creating CNI manager for ""
	I1204 23:11:22.906743  389201 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1204 23:11:22.906760  389201 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 23:11:22.906791  389201 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-630093 NodeName:addons-630093 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 23:11:22.906954  389201 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-630093"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 23:11:22.907084  389201 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 23:11:22.916048  389201 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 23:11:22.916128  389201 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 23:11:22.924791  389201 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1204 23:11:22.942166  389201 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 23:11:22.959356  389201 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
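Note: the rendered config lands on the node as /var/tmp/minikube/kubeadm.yaml.new (2287 bytes, matching the document above). It is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small gopkg.in/yaml.v3 sketch to split and sanity-check such a stream before `kubeadm init --config` (illustrative; not part of minikube):

```go
// sketch_kubeadm_yaml.go: walk the multi-document kubeadm config stream
// and report each document's apiVersion/kind.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f) // handles the "---" separators between documents
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s / %s\n", doc.APIVersion, doc.Kind)
	}
}
```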
	I1204 23:11:22.976793  389201 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1204 23:11:22.980197  389201 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:11:22.990601  389201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:11:23.062561  389201 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:11:23.075015  389201 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093 for IP: 192.168.49.2
	I1204 23:11:23.075040  389201 certs.go:194] generating shared ca certs ...
	I1204 23:11:23.075059  389201 certs.go:226] acquiring lock for ca certs: {Name:mk50fab2a60ec4c58718c6f0f51391a1fd27b49a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.075181  389201 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-381016/.minikube/ca.key
	I1204 23:11:23.204545  389201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-381016/.minikube/ca.crt ...
	I1204 23:11:23.204578  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/ca.crt: {Name:mkc915739630db1af592b52d8db74ccdd723c7d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.204795  389201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-381016/.minikube/ca.key ...
	I1204 23:11:23.204810  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/ca.key: {Name:mk98e76db05ffadd20917a2d52b7c5260ba39b61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.204916  389201 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.key
	I1204 23:11:23.290846  389201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.crt ...
	I1204 23:11:23.290885  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.crt: {Name:mkde85a69cd8a6277fae67df41cc397c773bd1a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.291129  389201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.key ...
	I1204 23:11:23.291148  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.key: {Name:mk4d177cf9edd13c7ad0b568d9086767e339e8d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.291277  389201 certs.go:256] generating profile certs ...
	I1204 23:11:23.291366  389201 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.key
	I1204 23:11:23.291400  389201 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt with IP's: []
	I1204 23:11:23.499855  389201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt ...
	I1204 23:11:23.499895  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: {Name:mk9311f602c7b1a2b44c19176448b2aa4b32b7c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.500105  389201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.key ...
	I1204 23:11:23.500123  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.key: {Name:mk9ddfb2303f17ccf88a6e5b8c00cffba1cd1a53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.500223  389201 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.key.8394f548
	I1204 23:11:23.500249  389201 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.crt.8394f548 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1204 23:11:23.788463  389201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.crt.8394f548 ...
	I1204 23:11:23.788500  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.crt.8394f548: {Name:mk43ba65c92ad4331db8d9847c5ef32165302741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.788694  389201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.key.8394f548 ...
	I1204 23:11:23.788714  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.key.8394f548: {Name:mkaced9e8196936ffe141d4dc3e6adda91a33533 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.788818  389201 certs.go:381] copying /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.crt.8394f548 -> /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.crt
	I1204 23:11:23.788916  389201 certs.go:385] copying /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.key.8394f548 -> /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.key
	I1204 23:11:23.788997  389201 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.key
	I1204 23:11:23.789023  389201 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.crt with IP's: []
	I1204 23:11:24.148068  389201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.crt ...
	I1204 23:11:24.148104  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.crt: {Name:mk0ee13602067d1cc858c9534a9707d295b361ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:24.148309  389201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.key ...
	I1204 23:11:24.148327  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.key: {Name:mk0ba88937bb7ca6e51a8cf0c8d2ef8507f0374f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:24.148532  389201 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca-key.pem (1675 bytes)
	I1204 23:11:24.148585  389201 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca.pem (1082 bytes)
	I1204 23:11:24.148628  389201 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/cert.pem (1123 bytes)
	I1204 23:11:24.148673  389201 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/key.pem (1679 bytes)
	I1204 23:11:24.149367  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 23:11:24.173224  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 23:11:24.196229  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 23:11:24.219088  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 23:11:24.242335  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1204 23:11:24.265632  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 23:11:24.288555  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 23:11:24.311820  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 23:11:24.334208  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 23:11:24.356395  389201 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 23:11:24.373538  389201 ssh_runner.go:195] Run: openssl version
	I1204 23:11:24.378816  389201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 23:11:24.388861  389201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:11:24.392560  389201 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:11 /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:11:24.392635  389201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:11:24.399222  389201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
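Note: the two runs above install minikubeCA.pem into the system trust store: `openssl x509 -hash -noout` prints the subject-name hash (b5213941 here), and the certificate is then symlinked as <hash>.0 so OpenSSL's lookup-by-hash finds it. A sketch of that rehash step driven from Go, shelling out to openssl the way the runner does (illustrative wrapper, not minikube's certs package):

```go
// sketch_rehash.go: link a CA cert into /etc/ssl/certs under its OpenSSL
// subject-hash name (<hash>.0), matching the openssl/ln commands above.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	const cert = "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
	log.Printf("linked %s -> %s", link, cert)
}
```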
	I1204 23:11:24.408373  389201 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 23:11:24.411765  389201 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 23:11:24.411828  389201 kubeadm.go:392] StartCluster: {Name:addons-630093 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-630093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:11:24.411930  389201 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 23:11:24.412006  389201 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 23:11:24.445620  389201 cri.go:89] found id: ""
	I1204 23:11:24.445692  389201 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 23:11:24.454281  389201 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 23:11:24.462658  389201 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1204 23:11:24.462715  389201 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 23:11:24.471058  389201 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 23:11:24.471082  389201 kubeadm.go:157] found existing configuration files:
	
	I1204 23:11:24.471133  389201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 23:11:24.479379  389201 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 23:11:24.479446  389201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 23:11:24.488299  389201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 23:11:24.496565  389201 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 23:11:24.496635  389201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 23:11:24.505412  389201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 23:11:24.514190  389201 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 23:11:24.514243  389201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 23:11:24.522477  389201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 23:11:24.531365  389201 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 23:11:24.531421  389201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
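Note: the four grep/rm pairs above are stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; here none exist yet (grep exits with status 2), so the rm calls are no-ops. The same check expressed in plain Go (illustrative, not the kubeadm.go code):

```go
// sketch_stale_conf.go: drop any kubeconfig that does not reference the
// expected control-plane endpoint, as the grep/rm sequence above does.
package main

import (
	"bytes"
	"log"
	"os"
	"path/filepath"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := filepath.Join("/etc/kubernetes", name)
		b, err := os.ReadFile(path)
		if os.IsNotExist(err) {
			continue // first start: nothing to clean up
		} else if err != nil {
			log.Fatal(err)
		}
		if !bytes.Contains(b, endpoint) {
			log.Printf("removing stale %s", path)
			os.Remove(path)
		}
	}
}
```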
	I1204 23:11:24.539416  389201 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1204 23:11:24.592567  389201 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1071-gcp\n", err: exit status 1
	I1204 23:11:24.645179  389201 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 23:11:33.426336  389201 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 23:11:33.426437  389201 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 23:11:33.426522  389201 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1204 23:11:33.426572  389201 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1071-gcp
	I1204 23:11:33.426602  389201 kubeadm.go:310] OS: Linux
	I1204 23:11:33.426679  389201 kubeadm.go:310] CGROUPS_CPU: enabled
	I1204 23:11:33.426720  389201 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1204 23:11:33.426798  389201 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1204 23:11:33.426877  389201 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1204 23:11:33.426958  389201 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1204 23:11:33.427034  389201 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1204 23:11:33.427111  389201 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1204 23:11:33.427182  389201 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1204 23:11:33.427243  389201 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1204 23:11:33.427304  389201 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 23:11:33.427436  389201 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 23:11:33.427575  389201 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 23:11:33.427676  389201 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 23:11:33.429670  389201 out.go:235]   - Generating certificates and keys ...
	I1204 23:11:33.429776  389201 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 23:11:33.429879  389201 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 23:11:33.429944  389201 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1204 23:11:33.429996  389201 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1204 23:11:33.430058  389201 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1204 23:11:33.430106  389201 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1204 23:11:33.430157  389201 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1204 23:11:33.430253  389201 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-630093 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1204 23:11:33.430323  389201 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1204 23:11:33.430455  389201 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-630093 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1204 23:11:33.430550  389201 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1204 23:11:33.430624  389201 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1204 23:11:33.430694  389201 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1204 23:11:33.430742  389201 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 23:11:33.430787  389201 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 23:11:33.430873  389201 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 23:11:33.430954  389201 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 23:11:33.431013  389201 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 23:11:33.431063  389201 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 23:11:33.431131  389201 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 23:11:33.431189  389201 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 23:11:33.432586  389201 out.go:235]   - Booting up control plane ...
	I1204 23:11:33.432667  389201 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 23:11:33.432728  389201 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 23:11:33.432786  389201 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 23:11:33.432889  389201 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 23:11:33.433004  389201 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 23:11:33.433088  389201 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 23:11:33.433245  389201 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 23:11:33.433395  389201 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 23:11:33.433490  389201 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.66305ms
	I1204 23:11:33.433586  389201 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 23:11:33.433659  389201 kubeadm.go:310] [api-check] The API server is healthy after 4.001728957s
	I1204 23:11:33.433784  389201 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 23:11:33.433892  389201 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 23:11:33.433961  389201 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 23:11:33.434106  389201 kubeadm.go:310] [mark-control-plane] Marking the node addons-630093 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 23:11:33.434165  389201 kubeadm.go:310] [bootstrap-token] Using token: 6qxarj.88k5pjf3ytyfzen4
	I1204 23:11:33.435845  389201 out.go:235]   - Configuring RBAC rules ...
	I1204 23:11:33.435945  389201 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 23:11:33.436019  389201 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 23:11:33.436136  389201 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 23:11:33.436246  389201 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 23:11:33.436351  389201 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 23:11:33.436423  389201 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 23:11:33.436515  389201 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 23:11:33.436552  389201 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 23:11:33.436626  389201 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 23:11:33.436642  389201 kubeadm.go:310] 
	I1204 23:11:33.436722  389201 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 23:11:33.436737  389201 kubeadm.go:310] 
	I1204 23:11:33.436836  389201 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 23:11:33.436844  389201 kubeadm.go:310] 
	I1204 23:11:33.436864  389201 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 23:11:33.436913  389201 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 23:11:33.436961  389201 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 23:11:33.436967  389201 kubeadm.go:310] 
	I1204 23:11:33.437008  389201 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 23:11:33.437016  389201 kubeadm.go:310] 
	I1204 23:11:33.437056  389201 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 23:11:33.437062  389201 kubeadm.go:310] 
	I1204 23:11:33.437107  389201 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 23:11:33.437170  389201 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 23:11:33.437258  389201 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 23:11:33.437274  389201 kubeadm.go:310] 
	I1204 23:11:33.437411  389201 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 23:11:33.437541  389201 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 23:11:33.437553  389201 kubeadm.go:310] 
	I1204 23:11:33.437672  389201 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6qxarj.88k5pjf3ytyfzen4 \
	I1204 23:11:33.437797  389201 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e2721502eca5fe8af4d77f137e4406b90f31d1565f7dd87db91cf7b9fa1e9057 \
	I1204 23:11:33.437833  389201 kubeadm.go:310] 	--control-plane 
	I1204 23:11:33.437842  389201 kubeadm.go:310] 
	I1204 23:11:33.437945  389201 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 23:11:33.437954  389201 kubeadm.go:310] 
	I1204 23:11:33.438055  389201 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6qxarj.88k5pjf3ytyfzen4 \
	I1204 23:11:33.438195  389201 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e2721502eca5fe8af4d77f137e4406b90f31d1565f7dd87db91cf7b9fa1e9057 
	I1204 23:11:33.438211  389201 cni.go:84] Creating CNI manager for ""
	I1204 23:11:33.438221  389201 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1204 23:11:33.439987  389201 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1204 23:11:33.441251  389201 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1204 23:11:33.445237  389201 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1204 23:11:33.445258  389201 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1204 23:11:33.462279  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
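Note: the kindnet manifest is applied with the cluster's pinned kubectl binary and the in-VM kubeconfig. The equivalent one-shot invocation from Go, with the arguments copied from the log line above (a trivial exec wrapper, not minikube's kubectl runner):

```go
// sketch_apply_cni.go: apply the rendered CNI manifest with the cluster's
// pinned kubectl, as the ssh_runner invocation above does.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.2/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("kubectl apply failed: %v\n%s", err, out)
	}
}
```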
	I1204 23:11:33.665861  389201 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 23:11:33.665944  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:33.665972  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-630093 minikube.k8s.io/updated_at=2024_12_04T23_11_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d minikube.k8s.io/name=addons-630093 minikube.k8s.io/primary=true
	I1204 23:11:33.673805  389201 ops.go:34] apiserver oom_adj: -16
	I1204 23:11:33.756672  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:34.256804  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:34.757586  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:35.256809  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:35.757274  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:36.256932  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:36.757774  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:37.257415  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:37.756756  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:38.256823  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:38.333806  389201 kubeadm.go:1113] duration metric: took 4.667934536s to wait for elevateKubeSystemPrivileges
	I1204 23:11:38.333851  389201 kubeadm.go:394] duration metric: took 13.922029737s to StartCluster
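Note: the repeated `kubectl get sa default` runs between 23:11:33 and 23:11:38 are a readiness poll: the "default" ServiceAccount only appears once kube-controller-manager's service-account controller has run, so the command is retried roughly every 500ms until it succeeds (4.67s here, per the elevateKubeSystemPrivileges metric). The pattern as a plain retry loop (illustrative; timeout value is an assumption):

```go
// sketch_wait_sa.go: poll until the "default" ServiceAccount exists,
// which is what the repeated `kubectl get sa default` runs above do.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for {
		err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.31.2/kubectl",
			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			log.Print("default service account is ready")
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("timed out waiting for default service account: %v", err)
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
}
```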
	I1204 23:11:38.333875  389201 settings.go:142] acquiring lock: {Name:mke2b5bd7468e0e3a170be0f2243b433cdca2b2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:38.334020  389201 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20045-381016/kubeconfig
	I1204 23:11:38.334556  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/kubeconfig: {Name:mk53a4e908644f8dfb244bee65db94736a5dc52e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:38.334826  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1204 23:11:38.334847  389201 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:11:38.334940  389201 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1204 23:11:38.335050  389201 config.go:182] Loaded profile config "addons-630093": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:11:38.335067  389201 addons.go:69] Setting yakd=true in profile "addons-630093"
	I1204 23:11:38.335086  389201 addons.go:234] Setting addon yakd=true in "addons-630093"
	I1204 23:11:38.335088  389201 addons.go:69] Setting inspektor-gadget=true in profile "addons-630093"
	I1204 23:11:38.335099  389201 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-630093"
	I1204 23:11:38.335108  389201 addons.go:69] Setting gcp-auth=true in profile "addons-630093"
	I1204 23:11:38.335116  389201 addons.go:234] Setting addon inspektor-gadget=true in "addons-630093"
	I1204 23:11:38.335118  389201 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-630093"
	I1204 23:11:38.335126  389201 mustload.go:65] Loading cluster: addons-630093
	I1204 23:11:38.335120  389201 addons.go:69] Setting storage-provisioner=true in profile "addons-630093"
	I1204 23:11:38.335142  389201 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-630093"
	I1204 23:11:38.335151  389201 addons.go:234] Setting addon storage-provisioner=true in "addons-630093"
	I1204 23:11:38.335142  389201 addons.go:69] Setting ingress=true in profile "addons-630093"
	I1204 23:11:38.335165  389201 addons.go:69] Setting ingress-dns=true in profile "addons-630093"
	I1204 23:11:38.335168  389201 addons.go:234] Setting addon ingress=true in "addons-630093"
	I1204 23:11:38.335170  389201 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-630093"
	I1204 23:11:38.335177  389201 addons.go:234] Setting addon ingress-dns=true in "addons-630093"
	I1204 23:11:38.335175  389201 addons.go:69] Setting metrics-server=true in profile "addons-630093"
	I1204 23:11:38.335186  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335187  389201 addons.go:234] Setting addon metrics-server=true in "addons-630093"
	I1204 23:11:38.335201  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335205  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335251  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335270  389201 config.go:182] Loaded profile config "addons-630093": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:11:38.335598  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335639  389201 addons.go:69] Setting registry=true in profile "addons-630093"
	I1204 23:11:38.335664  389201 addons.go:234] Setting addon registry=true in "addons-630093"
	I1204 23:11:38.335690  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335770  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335788  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335788  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335799  389201 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-630093"
	I1204 23:11:38.335865  389201 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-630093"
	I1204 23:11:38.335890  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.336127  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.336356  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335154  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335131  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.337395  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335166  389201 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-630093"
	I1204 23:11:38.337522  389201 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-630093"
	I1204 23:11:38.335779  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.337583  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335154  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335618  389201 addons.go:69] Setting volcano=true in profile "addons-630093"
	I1204 23:11:38.337980  389201 addons.go:234] Setting addon volcano=true in "addons-630093"
	I1204 23:11:38.338050  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.338346  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.338511  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.338659  389201 out.go:177] * Verifying Kubernetes components...
	I1204 23:11:38.338743  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335079  389201 addons.go:69] Setting cloud-spanner=true in profile "addons-630093"
	I1204 23:11:38.339343  389201 addons.go:234] Setting addon cloud-spanner=true in "addons-630093"
	I1204 23:11:38.339416  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.342329  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.343246  389201 addons.go:69] Setting default-storageclass=true in profile "addons-630093"
	I1204 23:11:38.343284  389201 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-630093"
	I1204 23:11:38.343690  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.343795  389201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:11:38.335605  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335627  389201 addons.go:69] Setting volumesnapshots=true in profile "addons-630093"
	I1204 23:11:38.344127  389201 addons.go:234] Setting addon volumesnapshots=true in "addons-630093"
	I1204 23:11:38.344187  389201 host.go:66] Checking if "addons-630093" exists ...
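The out-of-order timestamps in the `Setting addon ...` burst above are expected: minikube enables each requested addon in its own goroutine, so the log lines interleave. A minimal sketch of that fan-out pattern (illustrative only, not minikube's actual addons.go; `enableAddon` is a hypothetical stand-in for the per-addon enable callback):

    package main

    import (
        "fmt"
        "sync"
    )

    // enableAddon is a hypothetical stand-in for minikube's per-addon
    // enable path ("Setting addon X=true in <profile>").
    func enableAddon(profile, name string) error {
        fmt.Printf("Setting addon %s=true in %q\n", name, profile)
        return nil
    }

    func main() {
        addons := []string{"yakd", "ingress", "registry", "metrics-server"}
        var wg sync.WaitGroup
        for _, a := range addons {
            wg.Add(1)
            go func(name string) { // one goroutine per addon, hence interleaved timestamps
                defer wg.Done()
                if err := enableAddon("addons-630093", name); err != nil {
                    fmt.Printf("! Enabling %q returned an error: %v\n", name, err)
                }
            }(a)
        }
        wg.Wait()
    }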
	I1204 23:11:38.369102  389201 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1204 23:11:38.370392  389201 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1204 23:11:38.370441  389201 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1204 23:11:38.370514  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.375367  389201 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1204 23:11:38.376764  389201 out.go:177]   - Using image docker.io/registry:2.8.3
	I1204 23:11:38.378315  389201 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1204 23:11:38.378339  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1204 23:11:38.378415  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.387789  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.390443  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.396264  389201 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1204 23:11:38.397739  389201 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1204 23:11:38.397765  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1204 23:11:38.397836  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.403885  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1204 23:11:38.404091  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.406664  389201 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1204 23:11:38.407794  389201 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1204 23:11:38.409084  389201 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 23:11:38.413429  389201 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1204 23:11:38.413459  389201 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1204 23:11:38.413462  389201 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1204 23:11:38.413531  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.413533  389201 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1204 23:11:38.413544  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1204 23:11:38.413597  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.413711  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1204 23:11:38.413833  389201 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 23:11:38.413845  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 23:11:38.413897  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.414878  389201 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1204 23:11:38.414894  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1204 23:11:38.414957  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.416261  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1204 23:11:38.418117  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1204 23:11:38.419304  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1204 23:11:38.420751  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1204 23:11:38.422006  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1204 23:11:38.423748  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1204 23:11:38.424837  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1204 23:11:38.424860  389201 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1204 23:11:38.424941  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.430181  389201 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1204 23:11:38.434134  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1204 23:11:38.434699  389201 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1204 23:11:38.435845  389201 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1204 23:11:38.435868  389201 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1204 23:11:38.435951  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.438678  389201 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1204 23:11:38.444191  389201 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1204 23:11:38.444221  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1204 23:11:38.444288  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.451026  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.452847  389201 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1204 23:11:38.454187  389201 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1204 23:11:38.454245  389201 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1204 23:11:38.454263  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1204 23:11:38.454326  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.455564  389201 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1204 23:11:38.455600  389201 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1204 23:11:38.455669  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	W1204 23:11:38.458222  389201 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
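The volcano warning above is a fail-fast compatibility check rather than a crash: the addon's enable callback rejects the crio container runtime up front, and the run continues with the remaining addons. A hedged sketch of such a guard (function name and structure are illustrative, not minikube's source; only the error text is taken from the log):

    package main

    import (
        "errors"
        "fmt"
    )

    // errUnsupported mirrors the callback error seen in the log.
    var errUnsupported = errors.New("volcano addon does not support crio")

    // checkVolcanoRuntime is an illustrative guard run before enabling volcano.
    func checkVolcanoRuntime(runtime string) error {
        if runtime == "crio" {
            return errUnsupported
        }
        return nil
    }

    func main() {
        if err := checkVolcanoRuntime("crio"); err != nil {
            fmt.Println("! Enabling 'volcano' returned an error:", err)
        }
    }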
	I1204 23:11:38.462209  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.470069  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.470586  389201 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-630093"
	I1204 23:11:38.470686  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.471216  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.473482  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.476209  389201 addons.go:234] Setting addon default-storageclass=true in "addons-630093"
	I1204 23:11:38.476266  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.476733  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.477420  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.486737  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.488076  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.494091  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.494760  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.500157  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.514409  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.517053  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.526764  389201 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1204 23:11:38.528218  389201 out.go:177]   - Using image docker.io/busybox:stable
	I1204 23:11:38.529542  389201 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1204 23:11:38.529568  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1204 23:11:38.529635  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.532873  389201 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 23:11:38.532892  389201 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 23:11:38.532949  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.547794  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.550902  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
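Each manifest burst above follows the same shape: the asset is streamed into /etc/kubernetes/addons/ over SSH (from an on-disk asset, or from an embedded one, hence `scp memory -->`), then applied with the cluster's own version-matched kubectl. A sketch of how such an apply invocation could be assembled (paths copied from the log; the helper itself is illustrative and, in minikube's case, the command runs over SSH rather than locally):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyAddonManifests builds the command shape seen in the log:
    // sudo KUBECONFIG=... /var/lib/minikube/binaries/v1.31.2/kubectl apply -f ...
    func applyAddonManifests(files ...string) *exec.Cmd {
        args := []string{
            "KUBECONFIG=/var/lib/minikube/kubeconfig",
            "/var/lib/minikube/binaries/v1.31.2/kubectl", "apply",
        }
        for _, f := range files {
            args = append(args, "-f", f)
        }
        return exec.Command("sudo", args...)
    }

    func main() {
        cmd := applyAddonManifests("/etc/kubernetes/addons/storageclass.yaml")
        fmt.Println(cmd.String())
    }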
	I1204 23:11:38.714491  389201 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:11:38.714590  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1204 23:11:38.730697  389201 node_ready.go:35] waiting up to 6m0s for node "addons-630093" to be "Ready" ...
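node_ready.go then polls the node object until its Ready condition turns True (the log further down shows this taking about 19s). A minimal client-go sketch of that kind of wait, assuming an already-constructed clientset (this is a generic reimplementation, not minikube's node_ready.go):

    package nodewait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitNodeReady polls until the node's Ready condition is True,
    // up to the 6m0s budget seen in the log.
    func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // tolerate transient errors, keep polling
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }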
	I1204 23:11:38.896083  389201 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1204 23:11:38.896129  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1204 23:11:38.902650  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 23:11:38.903274  389201 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1204 23:11:38.903334  389201 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1204 23:11:38.908154  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1204 23:11:38.995367  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1204 23:11:38.996682  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1204 23:11:39.003953  389201 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1204 23:11:39.003987  389201 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1204 23:11:39.009058  389201 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1204 23:11:39.009092  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1204 23:11:39.011952  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 23:11:39.015960  389201 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1204 23:11:39.015992  389201 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1204 23:11:39.095325  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1204 23:11:39.099215  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1204 23:11:39.107754  389201 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1204 23:11:39.107787  389201 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1204 23:11:39.111656  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1204 23:11:39.199729  389201 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1204 23:11:39.199775  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1204 23:11:39.206060  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1204 23:11:39.206157  389201 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1204 23:11:39.207660  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1204 23:11:39.313681  389201 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 23:11:39.313712  389201 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1204 23:11:39.315754  389201 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1204 23:11:39.315836  389201 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1204 23:11:39.402197  389201 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1204 23:11:39.402298  389201 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1204 23:11:39.497285  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1204 23:11:39.613001  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 23:11:39.795499  389201 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1204 23:11:39.795537  389201 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1204 23:11:39.908631  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1204 23:11:39.908730  389201 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1204 23:11:40.110384  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1204 23:11:40.110490  389201 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1204 23:11:40.203583  389201 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1204 23:11:40.203684  389201 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1204 23:11:40.302900  389201 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1204 23:11:40.302989  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1204 23:11:40.305736  389201 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.591107897s)
	I1204 23:11:40.305865  389201 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
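The sed pipeline completed above edits the coredns ConfigMap so that pods can resolve host.minikube.internal to the host gateway (192.168.49.1). Reconstructed from the sed expression itself, the fragment it injects into the Corefile is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }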
	I1204 23:11:40.415986  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.513233503s)
	I1204 23:11:40.516873  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1204 23:11:40.516909  389201 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1204 23:11:40.606740  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1204 23:11:40.606836  389201 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1204 23:11:40.706038  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1204 23:11:41.013840  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.105639169s)
	I1204 23:11:41.019324  389201 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-630093" context rescaled to 1 replicas
	I1204 23:11:41.019970  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
	I1204 23:11:41.098870  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1204 23:11:41.098907  389201 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1204 23:11:41.103755  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.108338868s)
	I1204 23:11:41.296521  389201 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1204 23:11:41.296620  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1204 23:11:41.604186  389201 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1204 23:11:41.604271  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1204 23:11:41.711584  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1204 23:11:41.895283  389201 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1204 23:11:41.895375  389201 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1204 23:11:42.005218  389201 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1204 23:11:42.005322  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1204 23:11:42.196571  389201 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1204 23:11:42.196687  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1204 23:11:42.209452  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.212725161s)
	I1204 23:11:42.322610  389201 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1204 23:11:42.322752  389201 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1204 23:11:42.502862  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1204 23:11:42.809979  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.797973312s)
	I1204 23:11:42.810142  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.714779141s)
	I1204 23:11:43.015142  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.91582183s)
	I1204 23:11:43.300319  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
	I1204 23:11:44.520283  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.40857896s)
	I1204 23:11:44.520372  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.02299016s)
	I1204 23:11:44.520392  389201 addons.go:475] Verifying addon ingress=true in "addons-630093"
	I1204 23:11:44.520419  389201 addons.go:475] Verifying addon registry=true in "addons-630093"
	I1204 23:11:44.520330  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.312579258s)
	I1204 23:11:44.520780  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.814712029s)
	I1204 23:11:44.520741  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.907702215s)
	I1204 23:11:44.521986  389201 addons.go:475] Verifying addon metrics-server=true in "addons-630093"
	I1204 23:11:44.522358  389201 out.go:177] * Verifying ingress addon...
	I1204 23:11:44.522391  389201 out.go:177] * Verifying registry addon...
	I1204 23:11:44.523305  389201 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-630093 service yakd-dashboard -n yakd-dashboard
	
	I1204 23:11:44.525119  389201 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1204 23:11:44.525119  389201 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1204 23:11:44.600633  389201 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1204 23:11:44.600664  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:44.600855  389201 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1204 23:11:44.600872  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
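The kapi.go lines above are a label-selector poll: list the pods matching the addon's label in the target namespace and report their state until all are Running. A client-go sketch of one iteration of that check, assuming a clientset (selector strings are copied from the log; the helper is illustrative, not minikube's kapi.go):

    package kapiwait

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podsRunning reports whether every pod matching selector in ns is Running,
    // e.g. podsRunning(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry").
    func podsRunning(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
        if err != nil || len(pods.Items) == 0 {
            return false, err
        }
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                return false, nil // e.g. Pending, as logged above
            }
        }
        return true, nil
    }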
	I1204 23:11:45.030335  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:45.031111  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:45.524701  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.813019436s)
	W1204 23:11:45.524761  389201 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1204 23:11:45.524790  389201 retry.go:31] will retry after 181.865687ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
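The failure above is a classic CRD establishment race: the VolumeSnapshot CRDs and a VolumeSnapshotClass object were sent in a single apply, and the API server had not yet registered the new kinds when the CR arrived, hence "ensure CRDs are installed first". minikube's retry.go handles this by backing off and re-applying (with --force on the next attempt, as the log shows shortly below, which succeeds). A generic sketch of that retry-with-backoff shape (not minikube's retry.go itself):

    package retry

    import "time"

    // Do retries fn with doubling sleeps until it succeeds or the
    // attempt budget is exhausted, returning the last error.
    func Do(attempts int, initial time.Duration, fn func() error) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            time.Sleep(delay)
            delay *= 2
        }
        return err
    }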
	I1204 23:11:45.529400  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:45.529925  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:45.620284  389201 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1204 23:11:45.620363  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:45.640586  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:45.707473  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1204 23:11:45.802964  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
	I1204 23:11:45.916555  389201 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1204 23:11:45.999202  389201 addons.go:234] Setting addon gcp-auth=true in "addons-630093"
	I1204 23:11:45.999264  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:45.999784  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:46.028530  389201 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1204 23:11:46.028595  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:46.031316  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:46.031818  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:46.049437  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:46.408520  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.905505829s)
	I1204 23:11:46.408572  389201 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-630093"
	I1204 23:11:46.410390  389201 out.go:177] * Verifying csi-hostpath-driver addon...
	I1204 23:11:46.413226  389201 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1204 23:11:46.423132  389201 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1204 23:11:46.423158  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:46.530521  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:46.530917  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:46.918004  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:47.028913  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:47.029388  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:47.417466  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:47.531801  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:47.532309  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:47.916654  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:48.028517  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:48.029048  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:48.236314  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
	I1204 23:11:48.416588  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:48.528958  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:48.529570  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:48.735256  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.027721867s)
	I1204 23:11:48.735290  389201 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.706722291s)
	I1204 23:11:48.737269  389201 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1204 23:11:48.738737  389201 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1204 23:11:48.739945  389201 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1204 23:11:48.739962  389201 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1204 23:11:48.757606  389201 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1204 23:11:48.757640  389201 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1204 23:11:48.774462  389201 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1204 23:11:48.774491  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1204 23:11:48.791359  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1204 23:11:48.917479  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:49.028378  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:49.028791  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:49.119035  389201 addons.go:475] Verifying addon gcp-auth=true in "addons-630093"
	I1204 23:11:49.120662  389201 out.go:177] * Verifying gcp-auth addon...
	I1204 23:11:49.123168  389201 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1204 23:11:49.127558  389201 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1204 23:11:49.127594  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:49.417311  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:49.529241  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:49.529771  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:49.626790  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:49.917626  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:50.028348  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:50.028726  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:50.128054  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:50.417233  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:50.529158  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:50.529580  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:50.627050  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:50.734676  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
	I1204 23:11:50.917259  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:51.029147  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:51.029767  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:51.126874  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:51.417238  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:51.529239  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:51.529661  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:51.627160  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:51.916950  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:52.028762  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:52.029207  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:52.127128  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:52.417313  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:52.529136  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:52.529632  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:52.626885  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:52.917040  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:53.028643  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:53.029069  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:53.126271  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:53.233877  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
	I1204 23:11:53.417285  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:53.529030  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:53.529451  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:53.626877  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:53.917489  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:54.029327  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:54.029771  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:54.127217  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:54.416734  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:54.528697  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:54.529051  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:54.626826  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:54.916888  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:55.028438  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:55.028959  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:55.126396  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:55.234291  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
	I1204 23:11:55.417202  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:55.528962  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:55.529441  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:55.626790  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:55.917367  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:56.028910  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:56.029339  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:56.127003  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:56.416550  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:56.528268  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:56.528637  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:56.626903  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:56.917742  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:57.028644  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:57.029259  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:57.126655  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:57.417402  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:57.528943  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:57.529266  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:57.626610  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:57.802859  389201 node_ready.go:49] node "addons-630093" has status "Ready":"True"
	I1204 23:11:57.802968  389201 node_ready.go:38] duration metric: took 19.072220894s for node "addons-630093" to be "Ready" ...
	I1204 23:11:57.803001  389201 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 23:11:57.812284  389201 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace to be "Ready" ...
	I1204 23:11:57.918256  389201 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1204 23:11:57.918288  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:58.028987  389201 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1204 23:11:58.029025  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:58.029163  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:58.128052  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:58.418190  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:58.529517  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:58.529923  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:58.627312  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:58.919346  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:59.029950  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:59.030369  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:59.127570  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:59.418251  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:59.530785  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:59.531584  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:59.630759  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:59.818327  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:11:59.918676  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:00.030531  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:00.030960  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:00.127203  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:00.418498  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:00.529214  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:00.529347  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:00.626705  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:00.919036  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:01.029541  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:01.029735  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:01.127079  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:01.417804  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:01.529706  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:01.530306  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:01.626425  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:01.818875  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:01.918913  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:02.029895  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:02.030382  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:02.127260  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:02.423666  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:02.529870  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:02.530595  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:02.627705  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:02.918184  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:03.096822  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:03.098279  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:03.126704  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:03.418293  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:03.530189  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:03.531307  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:03.626994  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:03.819175  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:03.919019  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:04.029490  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:04.030689  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:04.127527  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:04.418611  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:04.529829  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:04.530049  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:04.627138  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:04.918884  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:05.029547  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:05.030544  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:05.127501  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:05.418586  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:05.529727  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:05.530098  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:05.629968  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:05.819250  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:05.917895  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:06.030341  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:06.030532  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:06.130159  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:06.417534  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:06.529640  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:06.529905  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:06.626512  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:06.918521  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:07.029270  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:07.029688  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:07.127053  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:07.417502  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:07.529692  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:07.530328  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:07.629361  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:07.917534  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:08.029222  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:08.029469  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:08.127082  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:08.319034  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:08.419261  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:08.529942  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:08.530672  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:08.627267  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:08.917968  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:09.029951  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:09.030163  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:09.126878  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:09.418269  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:09.529306  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:09.529537  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:09.627199  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:09.918335  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:10.029495  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:10.029837  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:10.127443  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:10.319436  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:10.418755  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:10.529622  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:10.529807  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:10.626252  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:10.917779  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:11.030059  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:11.030182  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:11.127180  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:11.419556  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:11.530723  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:11.531122  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:11.626618  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:11.918234  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:12.029550  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:12.029678  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:12.127740  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:12.418986  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:12.530019  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:12.530137  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:12.630114  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:12.819093  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:12.918200  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:13.029270  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:13.029507  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:13.127361  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:13.418296  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:13.528977  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:13.529560  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:13.629701  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:13.918107  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:14.028623  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:14.029060  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:14.126995  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:14.417833  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:14.601066  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:14.601685  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:14.700398  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:14.819539  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:14.918753  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:15.029149  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:15.029311  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:15.127355  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:15.417956  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:15.530046  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:15.530173  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:15.626804  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:15.817465  389201 pod_ready.go:93] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:15.817493  389201 pod_ready.go:82] duration metric: took 18.005165509s for pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.817504  389201 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nvslc" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.822063  389201 pod_ready.go:93] pod "coredns-7c65d6cfc9-nvslc" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:15.822085  389201 pod_ready.go:82] duration metric: took 4.574786ms for pod "coredns-7c65d6cfc9-nvslc" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.822105  389201 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.826436  389201 pod_ready.go:93] pod "etcd-addons-630093" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:15.826459  389201 pod_ready.go:82] duration metric: took 4.348229ms for pod "etcd-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.826472  389201 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.831213  389201 pod_ready.go:93] pod "kube-apiserver-addons-630093" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:15.831241  389201 pod_ready.go:82] duration metric: took 4.762165ms for pod "kube-apiserver-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.831254  389201 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.835452  389201 pod_ready.go:93] pod "kube-controller-manager-addons-630093" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:15.835474  389201 pod_ready.go:82] duration metric: took 4.212413ms for pod "kube-controller-manager-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.835486  389201 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k9l4p" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.918128  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:16.028729  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:16.029367  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:16.127315  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:16.216237  389201 pod_ready.go:93] pod "kube-proxy-k9l4p" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:16.216263  389201 pod_ready.go:82] duration metric: took 380.769812ms for pod "kube-proxy-k9l4p" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:16.216274  389201 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:16.417739  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:16.529747  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:16.530393  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:16.615744  389201 pod_ready.go:93] pod "kube-scheduler-addons-630093" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:16.615777  389201 pod_ready.go:82] duration metric: took 399.4948ms for pod "kube-scheduler-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:16.615792  389201 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:16.629644  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:16.918480  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:17.029640  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:17.030079  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:17.127575  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:17.418114  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:17.528932  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:17.530075  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:17.704033  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:17.998609  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:18.099865  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:18.100201  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:18.197667  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:18.418883  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:18.599572  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:18.600671  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:18.701570  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:18.703573  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:18.920015  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:19.100730  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:19.102395  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:19.198834  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:19.418509  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:19.529727  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:19.530383  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:19.626273  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:19.918805  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:20.029240  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:20.029932  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:20.126903  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:20.418249  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:20.529801  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:20.530308  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:20.626097  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:20.918878  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:21.029289  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:21.029519  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:21.122606  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:21.126039  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:21.418484  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:21.529710  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:21.530710  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:21.626146  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:21.918962  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:22.029458  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:22.029740  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:22.127214  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:22.419474  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:22.530071  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:22.530666  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:22.626757  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:22.919558  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:23.030183  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:23.030603  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:23.126737  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:23.419160  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:23.530176  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:23.530357  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:23.622846  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:23.626203  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:23.918700  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:24.028728  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:24.028982  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:24.126654  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:24.417980  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:24.530135  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:24.531100  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:24.627054  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:24.918427  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:25.028887  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:25.029218  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:25.126097  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:25.418781  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:25.529648  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:25.529792  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:25.625375  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:25.918175  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:26.029449  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:26.029717  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:26.121949  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:26.125965  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:26.418478  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:26.529251  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:26.529458  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:26.626865  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:26.918569  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:27.029067  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:27.030277  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:27.125626  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:27.418385  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:27.528662  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:27.529405  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:27.628474  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:27.917874  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:28.029704  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:28.029928  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:28.122056  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:28.126396  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:28.419714  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:28.529079  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:28.529300  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:28.628622  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:28.918659  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:29.028740  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:29.029352  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:29.126050  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:29.417959  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:29.529472  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:29.530620  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:29.629092  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:29.919400  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:30.030302  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:30.030514  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:30.122668  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:30.126280  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:30.418540  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:30.529288  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:30.529642  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:30.626549  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:30.918094  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:31.028726  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:31.029185  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:31.127032  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:31.418917  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:31.529225  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:31.529895  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:31.626376  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:31.917674  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:32.029127  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:32.029446  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:32.126980  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:32.418178  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:32.529226  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:32.529801  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:32.622787  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:32.629901  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:32.918843  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:33.029651  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:33.029732  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:33.126752  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:33.417866  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:33.529615  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:33.529803  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:33.626861  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:33.918296  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:34.029295  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:34.029827  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:34.126281  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:34.418699  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:34.529505  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:34.529651  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:34.642845  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:35.016246  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:35.029633  389201 kapi.go:107] duration metric: took 50.504509788s to wait for kubernetes.io/minikube-addons=registry ...
	I1204 23:12:35.030572  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:35.122008  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:35.126344  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:35.418953  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:35.529492  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:35.629301  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:35.917990  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:36.029160  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:36.126923  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:36.418071  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:36.530620  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:36.626415  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:36.918072  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:37.030355  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:37.122395  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:37.130220  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:37.418413  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:37.528927  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:37.625990  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:37.918227  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:38.029187  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:38.126369  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:38.417932  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:38.598800  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:38.697192  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:38.919507  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:39.029934  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:39.126608  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:39.417800  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:39.529782  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:39.621784  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:39.626154  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:39.918849  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:40.030159  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:40.126095  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:40.418225  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:40.531480  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:40.626066  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:40.922455  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:41.030073  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:41.132353  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:41.419213  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:41.530198  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:41.623990  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:41.626185  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:41.918285  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:42.029080  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:42.126525  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:42.417894  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:42.530073  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:42.628888  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:42.917931  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:43.029806  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:43.129456  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:43.417942  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:43.530219  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:43.626382  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:43.919862  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:44.030101  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:44.121891  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:44.126376  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:44.418428  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:44.529385  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:44.626961  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:44.918331  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:45.029815  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:45.130119  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:45.418987  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:45.530112  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:45.626679  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:45.917695  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:46.030308  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:46.122743  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:46.125898  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:46.418369  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:46.530377  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:46.626026  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:46.919590  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:47.029382  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:47.126945  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:47.418103  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:47.529610  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:47.626586  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:47.918784  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:48.030793  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:48.123333  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:48.125995  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:48.418085  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:48.529161  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:48.625851  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:48.918833  389201 kapi.go:107] duration metric: took 1m2.505604843s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
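The kapi.go:96 lines above are minikube polling each addon's pods by label selector until they leave Pending, and the kapi.go:107 line marks the csi-hostpath-driver poll finishing after 1m2.5s. A minimal sketch of such a selector poll, assuming a standard client-go clientset; the function name and interval are illustrative, not minikube's actual implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsBySelector polls until every pod matching selector is Running,
	// printing the current state on each tick much like the kapi.go:96 lines.
	// Illustrative only; not minikube's real loop.
	func waitForPodsBySelector(c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pods, err := c.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				fmt.Printf("waiting for pod %q, current state: Pending: [%v]\n", selector, err)
				return false, nil // transient errors and empty lists keep the poll alive
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
	}

Returning (false, nil) rather than an error on a failed list is what lets the loop ride out transient API hiccups until the overall timeout expires, which matches the repeated "Pending: [<nil>]" ticks in the log.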
	I1204 23:12:49.029518  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:49.126520  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:49.529429  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:49.626178  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:50.028779  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:50.126359  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:50.529535  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:50.621344  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:50.626657  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:51.029711  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:51.126167  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:51.528977  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:51.625730  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:52.029401  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:52.126687  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:52.529779  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:52.622444  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:52.626730  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:53.029789  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:53.125660  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:53.529648  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:53.625950  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:54.029567  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:54.126564  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:54.529619  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:54.626519  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:55.029917  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:55.121799  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:55.125909  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:55.530199  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:55.626324  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:56.029734  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:56.125940  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:56.529705  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:56.626054  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:57.072272  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:57.122241  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:57.126623  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:57.529316  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:57.626270  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:58.029340  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:58.126509  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:58.529559  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:58.626455  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:59.029135  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:59.126845  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:59.529933  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:59.621754  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:59.625881  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:00.029773  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:00.126622  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:00.529528  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:00.626582  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:01.029576  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:01.127058  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:01.530191  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:01.622552  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:01.626939  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:02.030598  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:02.130438  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:02.529743  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:02.626141  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:03.030953  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:03.149927  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:03.529333  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:03.622858  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:03.626677  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:04.029338  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:04.128963  389201 kapi.go:107] duration metric: took 1m15.005791002s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1204 23:13:04.130952  389201 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-630093 cluster.
	I1204 23:13:04.132630  389201 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1204 23:13:04.134066  389201 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
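The three out.go messages above document the gcp-auth addon's opt-out mechanism: pods are mutated to carry GCP credentials unless they carry a label with the `gcp-auth-skip-secret` key. As an illustrative example only (the exact label value the webhook checks is not shown in this log), something like `kubectl label pod <name> gcp-auth-skip-secret=true` would exempt a single pod.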
	I1204 23:13:04.599921  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:05.100341  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:05.599382  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:05.623902  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:06.029904  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:06.529164  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:07.029826  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:07.531039  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:08.030122  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:08.123005  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:08.529214  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:09.029839  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:09.529349  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:10.030137  389201 kapi.go:107] duration metric: took 1m25.505015693s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1204 23:13:10.032415  389201 out.go:177] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, default-storageclass, ingress-dns, storage-provisioner, cloud-spanner, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1204 23:13:10.034021  389201 addons.go:510] duration metric: took 1m31.699072904s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin default-storageclass ingress-dns storage-provisioner cloud-spanner storage-provisioner-rancher inspektor-gadget metrics-server yakd volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1204 23:13:10.622508  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:13.121894  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:15.622516  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:18.122616  389201 pod_ready.go:93] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"True"
	I1204 23:13:18.122655  389201 pod_ready.go:82] duration metric: took 1m1.506852695s for pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace to be "Ready" ...
	I1204 23:13:18.122671  389201 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-rj8jd" in "kube-system" namespace to be "Ready" ...
	I1204 23:13:18.127666  389201 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-rj8jd" in "kube-system" namespace has status "Ready":"True"
	I1204 23:13:18.127689  389201 pod_ready.go:82] duration metric: took 5.009056ms for pod "nvidia-device-plugin-daemonset-rj8jd" in "kube-system" namespace to be "Ready" ...
	I1204 23:13:18.127712  389201 pod_ready.go:39] duration metric: took 1m20.324660399s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
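The pod_ready.go:103/:93 lines flip from "Ready":"False" to "Ready":"True" once a pod's Ready condition becomes True. A sketch of that condition test, assuming client-go types; isPodReady is an illustrative name, not minikube's:

	package main

	import corev1 "k8s.io/api/core/v1"

	// isPodReady reports whether the pod's Ready condition is True, which is
	// the status the pod_ready.go lines above are waiting on.
	func isPodReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}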
	I1204 23:13:18.127736  389201 api_server.go:52] waiting for apiserver process to appear ...
	I1204 23:13:18.127773  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 23:13:18.127852  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 23:13:18.163496  389201 cri.go:89] found id: "697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f"
	I1204 23:13:18.163523  389201 cri.go:89] found id: ""
	I1204 23:13:18.163535  389201 logs.go:282] 1 containers: [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f]
	I1204 23:13:18.163604  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:18.167359  389201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 23:13:18.167448  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 23:13:18.204556  389201 cri.go:89] found id: "249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1"
	I1204 23:13:18.204586  389201 cri.go:89] found id: ""
	I1204 23:13:18.204598  389201 logs.go:282] 1 containers: [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1]
	I1204 23:13:18.204666  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:18.208385  389201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 23:13:18.208480  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 23:13:18.243732  389201 cri.go:89] found id: "1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2"
	I1204 23:13:18.243758  389201 cri.go:89] found id: ""
	I1204 23:13:18.243766  389201 logs.go:282] 1 containers: [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2]
	I1204 23:13:18.243825  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:18.247475  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 23:13:18.247549  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 23:13:18.284446  389201 cri.go:89] found id: "f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc"
	I1204 23:13:18.284481  389201 cri.go:89] found id: ""
	I1204 23:13:18.284494  389201 logs.go:282] 1 containers: [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc]
	I1204 23:13:18.284553  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:18.288056  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 23:13:18.288154  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 23:13:18.322998  389201 cri.go:89] found id: "76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d"
	I1204 23:13:18.323035  389201 cri.go:89] found id: ""
	I1204 23:13:18.323071  389201 logs.go:282] 1 containers: [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d]
	I1204 23:13:18.323127  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:18.326560  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 23:13:18.326662  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 23:13:18.360672  389201 cri.go:89] found id: "c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec"
	I1204 23:13:18.360695  389201 cri.go:89] found id: ""
	I1204 23:13:18.360704  389201 logs.go:282] 1 containers: [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec]
	I1204 23:13:18.360759  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:18.364394  389201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 23:13:18.364465  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 23:13:18.398753  389201 cri.go:89] found id: "f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac"
	I1204 23:13:18.398779  389201 cri.go:89] found id: ""
	I1204 23:13:18.398788  389201 logs.go:282] 1 containers: [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac]
	I1204 23:13:18.398837  389201 ssh_runner.go:195] Run: which crictl
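The cri.go:54/:89 exchanges above repeat one pattern per control-plane component: run `crictl ps -a --quiet --name=<component>` and collect the bare container IDs it prints (the trailing `found id: ""` is the empty string left by splitting on the final newline). A local sketch of that step, assuming crictl is installed on the machine running the code; listContainerIDs is an illustrative name, and minikube actually executes the command on the node over its ssh_runner:

	package main

	import (
		"os/exec"
		"strings"
	)

	// listContainerIDs shells out to crictl the same way the log above does
	// and returns the non-empty container IDs, one per output line.
	func listContainerIDs(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		var ids []string
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			if line != "" {
				ids = append(ids, line)
			}
		}
		return ids, nil
	}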
	I1204 23:13:18.402272  389201 logs.go:123] Gathering logs for CRI-O ...
	I1204 23:13:18.402308  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 23:13:18.480499  389201 logs.go:123] Gathering logs for etcd [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1] ...
	I1204 23:13:18.480540  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1"
	I1204 23:13:18.524595  389201 logs.go:123] Gathering logs for kube-scheduler [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc] ...
	I1204 23:13:18.524634  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc"
	I1204 23:13:18.566986  389201 logs.go:123] Gathering logs for kube-proxy [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d] ...
	I1204 23:13:18.567027  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d"
	I1204 23:13:18.602070  389201 logs.go:123] Gathering logs for kube-controller-manager [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec] ...
	I1204 23:13:18.602102  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec"
	I1204 23:13:18.658618  389201 logs.go:123] Gathering logs for kindnet [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac] ...
	I1204 23:13:18.658684  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac"
	I1204 23:13:18.696622  389201 logs.go:123] Gathering logs for container status ...
	I1204 23:13:18.696664  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 23:13:18.740640  389201 logs.go:123] Gathering logs for kubelet ...
	I1204 23:13:18.740679  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1204 23:13:18.779439  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:38 addons-630093 kubelet[1643]: W1204 23:11:38.340569    1643 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.779629  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:38 addons-630093 kubelet[1643]: E1204 23:11:38.340638    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.791512  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.658654    1643 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-630093" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.791674  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.658718    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.791800  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.658773    1643 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.791953  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.658814    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.792143  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661330    1643 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.792315  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661384    1643 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.792450  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661600    1643 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.792613  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661632    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.792743  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661689    1643 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.792901  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661706    1643 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.793033  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661862    1643 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.793194  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661888    1643 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.793332  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661952    1643 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.793495  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661968    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	I1204 23:13:18.826225  389201 logs.go:123] Gathering logs for dmesg ...
	I1204 23:13:18.826269  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 23:13:18.853723  389201 logs.go:123] Gathering logs for describe nodes ...
	I1204 23:13:18.853768  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 23:13:18.956948  389201 logs.go:123] Gathering logs for kube-apiserver [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f] ...
	I1204 23:13:18.956987  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f"
	I1204 23:13:19.002234  389201 logs.go:123] Gathering logs for coredns [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2] ...
	I1204 23:13:19.002271  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2"
	I1204 23:13:19.041497  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:13:19.041531  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1204 23:13:19.041595  389201 out.go:270] X Problems detected in kubelet:
	W1204 23:13:19.041609  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661706    1643 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:19.041619  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661862    1643 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-630093' and this object
	W1204 23:13:19.041628  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661888    1643 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:19.041636  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661952    1643 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:19.041642  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661968    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	I1204 23:13:19.041649  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:13:19.041654  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
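The logs.go:138 warnings above come from scanning the gathered journalctl output for kubelet lines that look like warnings or errors; the hits are then replayed under "X Problems detected in kubelet". A rough sketch of such a scan; the regexp here is a guess for illustration and may differ from minikube's actual matcher:

	package main

	import (
		"regexp"
		"strings"
	)

	// problemRe is an assumed pattern for kubelet warning/error journal lines
	// such as "kubelet[1643]: W1204 ..." or "... E1204 ...".
	var problemRe = regexp.MustCompile(`kubelet\[\d+\]: [WE]\d{4}`)

	// findKubeletProblems returns every journal line that looks like a kubelet
	// warning or error, roughly what logs.go:138 reports as a found problem.
	func findKubeletProblems(journal string) []string {
		var problems []string
		for _, line := range strings.Split(journal, "\n") {
			if problemRe.MatchString(line) {
				problems = append(problems, line)
			}
		}
		return problems
	}

Note that in this run the flagged lines are RBAC "forbidden" reflector errors from early in node startup; they are surfaced as problems but did not stop the addons from eventually becoming Ready.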
	I1204 23:13:29.043089  389201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 23:13:29.058130  389201 api_server.go:72] duration metric: took 1m50.723247239s to wait for apiserver process to appear ...
	I1204 23:13:29.058169  389201 api_server.go:88] waiting for apiserver healthz status ...
	I1204 23:13:29.058217  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 23:13:29.058262  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 23:13:29.093177  389201 cri.go:89] found id: "697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f"
	I1204 23:13:29.093208  389201 cri.go:89] found id: ""
	I1204 23:13:29.093217  389201 logs.go:282] 1 containers: [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f]
	I1204 23:13:29.093301  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.096893  389201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 23:13:29.096964  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 23:13:29.132522  389201 cri.go:89] found id: "249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1"
	I1204 23:13:29.132544  389201 cri.go:89] found id: ""
	I1204 23:13:29.132554  389201 logs.go:282] 1 containers: [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1]
	I1204 23:13:29.132596  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.136114  389201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 23:13:29.136174  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 23:13:29.171816  389201 cri.go:89] found id: "1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2"
	I1204 23:13:29.171839  389201 cri.go:89] found id: ""
	I1204 23:13:29.171850  389201 logs.go:282] 1 containers: [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2]
	I1204 23:13:29.171897  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.175512  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 23:13:29.175584  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 23:13:29.212035  389201 cri.go:89] found id: "f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc"
	I1204 23:13:29.212060  389201 cri.go:89] found id: ""
	I1204 23:13:29.212069  389201 logs.go:282] 1 containers: [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc]
	I1204 23:13:29.212116  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.215601  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 23:13:29.215669  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 23:13:29.251281  389201 cri.go:89] found id: "76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d"
	I1204 23:13:29.251304  389201 cri.go:89] found id: ""
	I1204 23:13:29.251312  389201 logs.go:282] 1 containers: [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d]
	I1204 23:13:29.251358  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.255228  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 23:13:29.255342  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 23:13:29.290460  389201 cri.go:89] found id: "c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec"
	I1204 23:13:29.290486  389201 cri.go:89] found id: ""
	I1204 23:13:29.290496  389201 logs.go:282] 1 containers: [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec]
	I1204 23:13:29.290559  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.294114  389201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 23:13:29.294191  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 23:13:29.330311  389201 cri.go:89] found id: "f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac"
	I1204 23:13:29.330336  389201 cri.go:89] found id: ""
	I1204 23:13:29.330346  389201 logs.go:282] 1 containers: [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac]
	I1204 23:13:29.330396  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.333992  389201 logs.go:123] Gathering logs for kube-proxy [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d] ...
	I1204 23:13:29.334023  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d"
	I1204 23:13:29.368566  389201 logs.go:123] Gathering logs for kindnet [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac] ...
	I1204 23:13:29.368596  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac"
	I1204 23:13:29.402199  389201 logs.go:123] Gathering logs for CRI-O ...
	I1204 23:13:29.402229  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 23:13:29.482290  389201 logs.go:123] Gathering logs for dmesg ...
	I1204 23:13:29.482339  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 23:13:29.510099  389201 logs.go:123] Gathering logs for describe nodes ...
	I1204 23:13:29.510142  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 23:13:29.615012  389201 logs.go:123] Gathering logs for kube-apiserver [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f] ...
	I1204 23:13:29.615047  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f"
	I1204 23:13:29.660921  389201 logs.go:123] Gathering logs for etcd [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1] ...
	I1204 23:13:29.660962  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1"
	I1204 23:13:29.704015  389201 logs.go:123] Gathering logs for coredns [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2] ...
	I1204 23:13:29.704060  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2"
	I1204 23:13:29.747065  389201 logs.go:123] Gathering logs for kubelet ...
	I1204 23:13:29.747100  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1204 23:13:29.827553  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:38 addons-630093 kubelet[1643]: W1204 23:11:38.340569    1643 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.827776  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:38 addons-630093 kubelet[1643]: E1204 23:11:38.340638    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.839459  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.658654    1643 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-630093" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.839672  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.658718    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.839847  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.658773    1643 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.840075  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.658814    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.840275  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661330    1643 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.840505  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661384    1643 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.840699  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661600    1643 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.840936  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661632    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.841134  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661689    1643 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.841361  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661706    1643 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.841560  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661862    1643 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.841791  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661888    1643 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.842000  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661952    1643 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.842238  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661968    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	I1204 23:13:29.875377  389201 logs.go:123] Gathering logs for kube-scheduler [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc] ...
	I1204 23:13:29.875420  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc"
	I1204 23:13:29.915909  389201 logs.go:123] Gathering logs for kube-controller-manager [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec] ...
	I1204 23:13:29.915942  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec"
	I1204 23:13:29.975760  389201 logs.go:123] Gathering logs for container status ...
	I1204 23:13:29.975799  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 23:13:30.020004  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:13:30.020036  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1204 23:13:30.020104  389201 out.go:270] X Problems detected in kubelet:
	W1204 23:13:30.020121  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661706    1643 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:30.020132  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661862    1643 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-630093' and this object
	W1204 23:13:30.020149  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661888    1643 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:30.020164  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661952    1643 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:30.020176  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661968    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	I1204 23:13:30.020187  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:13:30.020199  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:13:40.021029  389201 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1204 23:13:40.025015  389201 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1204 23:13:40.026016  389201 api_server.go:141] control plane version: v1.31.2
	I1204 23:13:40.026045  389201 api_server.go:131] duration metric: took 10.967868289s to wait for apiserver health ...
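The api_server.go lines above poll https://192.168.49.2:8443/healthz until it returns 200 with body "ok". A minimal sketch of that probe, assuming the caller supplies an *http.Client whose TLS config already trusts the cluster's CA; apiServerHealthy is an illustrative name:

	package main

	import (
		"io"
		"net/http"
		"strings"
	)

	// apiServerHealthy issues the healthz probe shown above: a 200 response
	// whose body is "ok" counts as healthy.
	func apiServerHealthy(addr string, client *http.Client) bool {
		resp, err := client.Get("https://" + addr + "/healthz")
		if err != nil {
			return false
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			return false
		}
		return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok"
	}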
	I1204 23:13:40.026053  389201 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 23:13:40.026087  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 23:13:40.026139  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 23:13:40.061619  389201 cri.go:89] found id: "697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f"
	I1204 23:13:40.061656  389201 cri.go:89] found id: ""
	I1204 23:13:40.061667  389201 logs.go:282] 1 containers: [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f]
	I1204 23:13:40.061726  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.065276  389201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 23:13:40.065347  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 23:13:40.099762  389201 cri.go:89] found id: "249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1"
	I1204 23:13:40.099784  389201 cri.go:89] found id: ""
	I1204 23:13:40.099791  389201 logs.go:282] 1 containers: [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1]
	I1204 23:13:40.099846  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.103315  389201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 23:13:40.103376  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 23:13:40.138517  389201 cri.go:89] found id: "1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2"
	I1204 23:13:40.138548  389201 cri.go:89] found id: ""
	I1204 23:13:40.138558  389201 logs.go:282] 1 containers: [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2]
	I1204 23:13:40.138608  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.142278  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 23:13:40.142338  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 23:13:40.177139  389201 cri.go:89] found id: "f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc"
	I1204 23:13:40.177162  389201 cri.go:89] found id: ""
	I1204 23:13:40.177169  389201 logs.go:282] 1 containers: [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc]
	I1204 23:13:40.177224  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.180724  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 23:13:40.180787  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 23:13:40.215881  389201 cri.go:89] found id: "76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d"
	I1204 23:13:40.215909  389201 cri.go:89] found id: ""
	I1204 23:13:40.215921  389201 logs.go:282] 1 containers: [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d]
	I1204 23:13:40.215978  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.219605  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 23:13:40.219672  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 23:13:40.254791  389201 cri.go:89] found id: "c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec"
	I1204 23:13:40.254818  389201 cri.go:89] found id: ""
	I1204 23:13:40.254830  389201 logs.go:282] 1 containers: [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec]
	I1204 23:13:40.254883  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.258537  389201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 23:13:40.258600  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 23:13:40.293449  389201 cri.go:89] found id: "f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac"
	I1204 23:13:40.293476  389201 cri.go:89] found id: ""
	I1204 23:13:40.293486  389201 logs.go:282] 1 containers: [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac]
	I1204 23:13:40.293542  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.297150  389201 logs.go:123] Gathering logs for CRI-O ...
	I1204 23:13:40.297182  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 23:13:40.372794  389201 logs.go:123] Gathering logs for container status ...
	I1204 23:13:40.372843  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 23:13:40.419461  389201 logs.go:123] Gathering logs for describe nodes ...
	I1204 23:13:40.419498  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 23:13:40.534097  389201 logs.go:123] Gathering logs for etcd [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1] ...
	I1204 23:13:40.534131  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1"
	I1204 23:13:40.578901  389201 logs.go:123] Gathering logs for coredns [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2] ...
	I1204 23:13:40.578941  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2"
	I1204 23:13:40.616890  389201 logs.go:123] Gathering logs for kube-controller-manager [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec] ...
	I1204 23:13:40.616923  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec"
	I1204 23:13:40.676313  389201 logs.go:123] Gathering logs for kube-proxy [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d] ...
	I1204 23:13:40.676354  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d"
	I1204 23:13:40.712137  389201 logs.go:123] Gathering logs for kindnet [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac] ...
	I1204 23:13:40.712171  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac"
	I1204 23:13:40.749253  389201 logs.go:123] Gathering logs for kubelet ...
	I1204 23:13:40.749283  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1204 23:13:40.793451  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:38 addons-630093 kubelet[1643]: W1204 23:11:38.340569    1643 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.793680  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:38 addons-630093 kubelet[1643]: E1204 23:11:38.340638    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.805200  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.658654    1643 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-630093" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.805392  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.658718    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.805575  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.658773    1643 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.805790  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.658814    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.805984  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661330    1643 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.806212  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661384    1643 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.806412  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661600    1643 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.806670  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661632    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.806884  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661689    1643 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.807109  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661706    1643 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.807303  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661862    1643 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.807526  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661888    1643 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.807722  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661952    1643 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.807952  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661968    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	I1204 23:13:40.842035  389201 logs.go:123] Gathering logs for dmesg ...
	I1204 23:13:40.842083  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 23:13:40.868911  389201 logs.go:123] Gathering logs for kube-apiserver [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f] ...
	I1204 23:13:40.868949  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f"
	I1204 23:13:40.915327  389201 logs.go:123] Gathering logs for kube-scheduler [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc] ...
	I1204 23:13:40.915367  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc"
	I1204 23:13:40.958116  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:13:40.958151  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1204 23:13:40.958253  389201 out.go:270] X Problems detected in kubelet:
	W1204 23:13:40.958268  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661706    1643 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.958278  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661862    1643 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.958294  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661888    1643 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.958308  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661952    1643 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.958323  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661968    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	I1204 23:13:40.958329  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:13:40.958338  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:13:50.969322  389201 system_pods.go:59] 19 kube-system pods found
	I1204 23:13:50.969358  389201 system_pods.go:61] "amd-gpu-device-plugin-xfdff" [b964506a-e0bb-4f8e-a33d-b1583ba8451e] Running
	I1204 23:13:50.969363  389201 system_pods.go:61] "coredns-7c65d6cfc9-nvslc" [e12dda0f-2d10-4096-b12f-73bd871cc18e] Running
	I1204 23:13:50.969368  389201 system_pods.go:61] "csi-hostpath-attacher-0" [af4d7f93-4989-4c1d-8c89-43d0e74f1a44] Running
	I1204 23:13:50.969372  389201 system_pods.go:61] "csi-hostpath-resizer-0" [5198084f-6ce5-4b12-89f8-5d8a76057764] Running
	I1204 23:13:50.969375  389201 system_pods.go:61] "csi-hostpathplugin-97jlr" [1d17a273-85e7-4f77-9bbe-7786a88d0ebe] Running
	I1204 23:13:50.969379  389201 system_pods.go:61] "etcd-addons-630093" [7758ddc9-6dfb-4fe8-a37f-1ef8170cd720] Running
	I1204 23:13:50.969382  389201 system_pods.go:61] "kindnet-sklhp" [a2a719ef-fccf-456e-88ac-b6e5fad34e3e] Running
	I1204 23:13:50.969387  389201 system_pods.go:61] "kube-apiserver-addons-630093" [34402f18-4ebe-4e53-9495-549544e9f70c] Running
	I1204 23:13:50.969393  389201 system_pods.go:61] "kube-controller-manager-addons-630093" [e33f5809-04da-4fb0-8265-2e29e7f90e15] Running
	I1204 23:13:50.969408  389201 system_pods.go:61] "kube-ingress-dns-minikube" [4cda5680-90e6-43e2-b35f-bf0976f6fef3] Running
	I1204 23:13:50.969415  389201 system_pods.go:61] "kube-proxy-k9l4p" [bddbd74f-1a8f-4181-b2f7-decc74059f10] Running
	I1204 23:13:50.969420  389201 system_pods.go:61] "kube-scheduler-addons-630093" [1f496311-6985-4c79-a19a-4ceade68e41e] Running
	I1204 23:13:50.969429  389201 system_pods.go:61] "metrics-server-84c5f94fbc-vtkhx" [cec44a14-191c-4123-b802-68a2c04f883d] Running
	I1204 23:13:50.969434  389201 system_pods.go:61] "nvidia-device-plugin-daemonset-rj8jd" [4960e5ae-fa86-4256-ac61-055f4d0adce3] Running
	I1204 23:13:50.969441  389201 system_pods.go:61] "registry-66c9cd494c-hxfdr" [b4aeaa23-62f9-4d1d-ba93-e79530728a03] Running
	I1204 23:13:50.969444  389201 system_pods.go:61] "registry-proxy-s54q4" [63f58b93-3d5f-4e3c-856e-74c6e4079acd] Running
	I1204 23:13:50.969453  389201 system_pods.go:61] "snapshot-controller-56fcc65765-2492d" [a604be0a-c061-4a65-9d32-0b98fff12222] Running
	I1204 23:13:50.969458  389201 system_pods.go:61] "snapshot-controller-56fcc65765-xtclh" [845fd71c-634d-41e2-a101-08a0c1458418] Running
	I1204 23:13:50.969461  389201 system_pods.go:61] "storage-provisioner" [cde6de53-e600-4898-a1c3-df78f4d4e6ff] Running
	I1204 23:13:50.969470  389201 system_pods.go:74] duration metric: took 10.943410983s to wait for pod list to return data ...
	I1204 23:13:50.969480  389201 default_sa.go:34] waiting for default service account to be created ...
	I1204 23:13:50.972205  389201 default_sa.go:45] found service account: "default"
	I1204 23:13:50.972229  389201 default_sa.go:55] duration metric: took 2.740927ms for default service account to be created ...
	I1204 23:13:50.972237  389201 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 23:13:50.980831  389201 system_pods.go:86] 19 kube-system pods found
	I1204 23:13:50.980861  389201 system_pods.go:89] "amd-gpu-device-plugin-xfdff" [b964506a-e0bb-4f8e-a33d-b1583ba8451e] Running
	I1204 23:13:50.980867  389201 system_pods.go:89] "coredns-7c65d6cfc9-nvslc" [e12dda0f-2d10-4096-b12f-73bd871cc18e] Running
	I1204 23:13:50.980872  389201 system_pods.go:89] "csi-hostpath-attacher-0" [af4d7f93-4989-4c1d-8c89-43d0e74f1a44] Running
	I1204 23:13:50.980876  389201 system_pods.go:89] "csi-hostpath-resizer-0" [5198084f-6ce5-4b12-89f8-5d8a76057764] Running
	I1204 23:13:50.980880  389201 system_pods.go:89] "csi-hostpathplugin-97jlr" [1d17a273-85e7-4f77-9bbe-7786a88d0ebe] Running
	I1204 23:13:50.980883  389201 system_pods.go:89] "etcd-addons-630093" [7758ddc9-6dfb-4fe8-a37f-1ef8170cd720] Running
	I1204 23:13:50.980887  389201 system_pods.go:89] "kindnet-sklhp" [a2a719ef-fccf-456e-88ac-b6e5fad34e3e] Running
	I1204 23:13:50.980891  389201 system_pods.go:89] "kube-apiserver-addons-630093" [34402f18-4ebe-4e53-9495-549544e9f70c] Running
	I1204 23:13:50.980895  389201 system_pods.go:89] "kube-controller-manager-addons-630093" [e33f5809-04da-4fb0-8265-2e29e7f90e15] Running
	I1204 23:13:50.980899  389201 system_pods.go:89] "kube-ingress-dns-minikube" [4cda5680-90e6-43e2-b35f-bf0976f6fef3] Running
	I1204 23:13:50.980905  389201 system_pods.go:89] "kube-proxy-k9l4p" [bddbd74f-1a8f-4181-b2f7-decc74059f10] Running
	I1204 23:13:50.980910  389201 system_pods.go:89] "kube-scheduler-addons-630093" [1f496311-6985-4c79-a19a-4ceade68e41e] Running
	I1204 23:13:50.980914  389201 system_pods.go:89] "metrics-server-84c5f94fbc-vtkhx" [cec44a14-191c-4123-b802-68a2c04f883d] Running
	I1204 23:13:50.980920  389201 system_pods.go:89] "nvidia-device-plugin-daemonset-rj8jd" [4960e5ae-fa86-4256-ac61-055f4d0adce3] Running
	I1204 23:13:50.980926  389201 system_pods.go:89] "registry-66c9cd494c-hxfdr" [b4aeaa23-62f9-4d1d-ba93-e79530728a03] Running
	I1204 23:13:50.980929  389201 system_pods.go:89] "registry-proxy-s54q4" [63f58b93-3d5f-4e3c-856e-74c6e4079acd] Running
	I1204 23:13:50.980933  389201 system_pods.go:89] "snapshot-controller-56fcc65765-2492d" [a604be0a-c061-4a65-9d32-0b98fff12222] Running
	I1204 23:13:50.980939  389201 system_pods.go:89] "snapshot-controller-56fcc65765-xtclh" [845fd71c-634d-41e2-a101-08a0c1458418] Running
	I1204 23:13:50.980943  389201 system_pods.go:89] "storage-provisioner" [cde6de53-e600-4898-a1c3-df78f4d4e6ff] Running
	I1204 23:13:50.980952  389201 system_pods.go:126] duration metric: took 8.709075ms to wait for k8s-apps to be running ...
	I1204 23:13:50.980961  389201 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 23:13:50.981009  389201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:13:50.992805  389201 system_svc.go:56] duration metric: took 11.832695ms WaitForService to wait for kubelet
	I1204 23:13:50.992839  389201 kubeadm.go:582] duration metric: took 2m12.65796392s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 23:13:50.992860  389201 node_conditions.go:102] verifying NodePressure condition ...
	I1204 23:13:50.996391  389201 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1204 23:13:50.996430  389201 node_conditions.go:123] node cpu capacity is 8
	I1204 23:13:50.996447  389201 node_conditions.go:105] duration metric: took 3.580009ms to run NodePressure ...
	I1204 23:13:50.996463  389201 start.go:241] waiting for startup goroutines ...
	I1204 23:13:50.996483  389201 start.go:246] waiting for cluster config update ...
	I1204 23:13:50.996508  389201 start.go:255] writing updated cluster config ...
	I1204 23:13:50.996891  389201 ssh_runner.go:195] Run: rm -f paused
	I1204 23:13:51.048677  389201 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 23:13:51.051940  389201 out.go:177] * Done! kubectl is now configured to use "addons-630093" cluster and "default" namespace by default
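
Annotation: the kubelet problems flagged above ("no relationship found between node 'addons-630093' and this object") are Node-authorizer denials, not RBAC misconfiguration: the pods referencing kube-root-ca.crt, gcp-auth, coredns, and local-path-config were scheduled moments before the authorizer's object graph caught up, so the first list/watch was rejected and succeeded on retry, and the run still ends with "Done!". A rough way to re-test such a denial after the fact, assuming an admin context with impersonation rights (the Node authorizer's answer depends on the node-to-object relationship, so this is only an approximation):

    kubectl --context addons-630093 auth can-i list configmaps -n kube-system \
      --as=system:node:addons-630093 --as-group=system:nodes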
	
	
	==> CRI-O <==
	Dec 04 23:20:53 addons-630093 crio[1031]: time="2024-12-04 23:20:53.810913562Z" level=info msg="Image docker.io/nginx:latest not found" id=8b17d385-7f84-47ac-9af7-ff037df14126 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:20:53 addons-630093 crio[1031]: time="2024-12-04 23:20:53.811410788Z" level=info msg="Pulling image: docker.io/nginx:latest" id=02dceb4f-97b1-4c56-91ff-95f12d2126ac name=/runtime.v1.ImageService/PullImage
	Dec 04 23:20:53 addons-630093 crio[1031]: time="2024-12-04 23:20:53.817344224Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Dec 04 23:21:00 addons-630093 crio[1031]: time="2024-12-04 23:21:00.811044880Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=4592b077-bea6-4a30-a27e-36f0920e6f41 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:21:00 addons-630093 crio[1031]: time="2024-12-04 23:21:00.811341179Z" level=info msg="Image docker.io/nginx:alpine not found" id=4592b077-bea6-4a30-a27e-36f0920e6f41 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:21:12 addons-630093 crio[1031]: time="2024-12-04 23:21:12.811066546Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=92ca8ae7-d18b-4a19-9a6a-b5d6071f032e name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:21:12 addons-630093 crio[1031]: time="2024-12-04 23:21:12.811306443Z" level=info msg="Image docker.io/nginx:alpine not found" id=92ca8ae7-d18b-4a19-9a6a-b5d6071f032e name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:21:27 addons-630093 crio[1031]: time="2024-12-04 23:21:27.811314739Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=abe39daf-eb7c-49ed-b02d-3a27e7acab35 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:21:27 addons-630093 crio[1031]: time="2024-12-04 23:21:27.811604897Z" level=info msg="Image docker.io/nginx:alpine not found" id=abe39daf-eb7c-49ed-b02d-3a27e7acab35 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:21:35 addons-630093 crio[1031]: time="2024-12-04 23:21:35.811630589Z" level=info msg="Checking image status: docker.io/nginx:latest" id=f451fab8-d8d3-4ac9-bedd-09f91c8c5896 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:21:35 addons-630093 crio[1031]: time="2024-12-04 23:21:35.811878696Z" level=info msg="Image docker.io/nginx:latest not found" id=f451fab8-d8d3-4ac9-bedd-09f91c8c5896 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:21:41 addons-630093 crio[1031]: time="2024-12-04 23:21:41.811465463Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=21c9425d-8ae0-44af-9e63-3a52b311d6c5 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:21:41 addons-630093 crio[1031]: time="2024-12-04 23:21:41.811733740Z" level=info msg="Image docker.io/nginx:alpine not found" id=21c9425d-8ae0-44af-9e63-3a52b311d6c5 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:21:50 addons-630093 crio[1031]: time="2024-12-04 23:21:50.811011732Z" level=info msg="Checking image status: docker.io/nginx:latest" id=694a10f1-30e2-4a41-abe9-8beda8e234b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:21:50 addons-630093 crio[1031]: time="2024-12-04 23:21:50.811302681Z" level=info msg="Image docker.io/nginx:latest not found" id=694a10f1-30e2-4a41-abe9-8beda8e234b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:21:56 addons-630093 crio[1031]: time="2024-12-04 23:21:56.810846510Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=e8173988-df81-4802-8384-26350c87bcb3 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:21:56 addons-630093 crio[1031]: time="2024-12-04 23:21:56.811080431Z" level=info msg="Image docker.io/nginx:alpine not found" id=e8173988-df81-4802-8384-26350c87bcb3 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:21:56 addons-630093 crio[1031]: time="2024-12-04 23:21:56.811619408Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=426c02df-39f1-4f02-9840-281d394f55be name=/runtime.v1.ImageService/PullImage
	Dec 04 23:21:56 addons-630093 crio[1031]: time="2024-12-04 23:21:56.815971512Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Dec 04 23:22:03 addons-630093 crio[1031]: time="2024-12-04 23:22:03.810963284Z" level=info msg="Checking image status: docker.io/nginx:latest" id=173f8ad4-a567-4bee-b497-c029350e4ac8 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:22:03 addons-630093 crio[1031]: time="2024-12-04 23:22:03.811202532Z" level=info msg="Image docker.io/nginx:latest not found" id=173f8ad4-a567-4bee-b497-c029350e4ac8 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:22:14 addons-630093 crio[1031]: time="2024-12-04 23:22:14.811639100Z" level=info msg="Checking image status: docker.io/nginx:latest" id=c2239450-05f6-4a14-9f03-baf71b031559 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:22:14 addons-630093 crio[1031]: time="2024-12-04 23:22:14.811890663Z" level=info msg="Image docker.io/nginx:latest not found" id=c2239450-05f6-4a14-9f03-baf71b031559 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:22:25 addons-630093 crio[1031]: time="2024-12-04 23:22:25.810745642Z" level=info msg="Checking image status: docker.io/nginx:latest" id=0c19f8ad-cb42-44ba-ab05-a3445a1458cd name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:22:25 addons-630093 crio[1031]: time="2024-12-04 23:22:25.811037877Z" level=info msg="Image docker.io/nginx:latest not found" id=0c19f8ad-cb42-44ba-ab05-a3445a1458cd name=/runtime.v1.ImageService/ImageStatus
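
Annotation: this CRI-O excerpt is the proximate cause of the Ingress failure. From 23:20:53 through 23:22:25 the runtime alternates between "Image docker.io/nginx:alpine not found" / "docker.io/nginx:latest not found" and fresh pull attempts that never log a completion, which matches the ImagePullBackOff on the nginx pod. A minimal check to separate Docker Hub rate limiting from a general network problem is to retry the pull by hand on the node (node/profile name taken from this report; a rate-limited manual pull typically fails with a "toomanyrequests" error):

    out/minikube-linux-amd64 -p addons-630093 ssh -- sudo crictl pull docker.io/library/nginx:alpine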
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a92f917845840       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          8 minutes ago       Running             busybox                   0                   9101d3097d84d       busybox
	19a975e308aa0       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             9 minutes ago       Running             controller                0                   f7e4db205d4a2       ingress-nginx-controller-5f85ff4588-bjrmz
	d43b4e626d869       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   9 minutes ago       Exited              patch                     0                   1453371ecba6e       ingress-nginx-admission-patch-6klmq
	9cfd8f1d1fc9d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   9 minutes ago       Exited              create                    0                   6a2e4839790d0       ingress-nginx-admission-create-g9mgr
	34d29b45443cc       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             10 minutes ago      Running             minikube-ingress-dns      0                   fe05a9e0f9e54       kube-ingress-dns-minikube
	1c628d0404971       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             10 minutes ago      Running             coredns                   0                   e5a18048ffd94       coredns-7c65d6cfc9-nvslc
	7579ef8738441       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             10 minutes ago      Running             storage-provisioner       0                   53117b6914cba       storage-provisioner
	f0e1e1197d418       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16                           10 minutes ago      Running             kindnet-cni               0                   8e1077c9b19f2       kindnet-sklhp
	76b8a8033f246       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                             10 minutes ago      Running             kube-proxy                0                   7b72d950d834d       kube-proxy-k9l4p
	f25ca8d234e67       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                             10 minutes ago      Running             kube-scheduler            0                   6ecfaa8cbb0a8       kube-scheduler-addons-630093
	697a8666b9beb       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                             10 minutes ago      Running             kube-apiserver            0                   c5cc52570c5da       kube-apiserver-addons-630093
	249b17c70ce14       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             10 minutes ago      Running             etcd                      0                   5c544b67b37e6       etcd-addons-630093
	c18ad7ba7d7db       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                             10 minutes ago      Running             kube-controller-manager   0                   2b2d046f58c6b       kube-controller-manager-addons-630093
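
Annotation: the table above has no row for an nginx container at all. Every listed container belongs to the control plane or the addons, all Running with 0 restarts, and the two Exited kube-webhook-certgen entries are completed admission-setup jobs, which is normal. A pod whose image never pulls never gets a container created, so the absence here is consistent with the CRI-O pull loop rather than a crash. To confirm from the node, using the same crictl filter style seen elsewhere in this log:

    out/minikube-linux-amd64 -p addons-630093 ssh -- sudo crictl ps -a --name nginx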
	
	
	==> coredns [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2] <==
	[INFO] 10.244.0.13:36200 - 58124 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000101425s
	[INFO] 10.244.0.13:43691 - 63611 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005338233s
	[INFO] 10.244.0.13:43691 - 63271 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005381209s
	[INFO] 10.244.0.13:44344 - 26272 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005410445s
	[INFO] 10.244.0.13:44344 - 26005 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006018948s
	[INFO] 10.244.0.13:60838 - 12332 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005880377s
	[INFO] 10.244.0.13:60838 - 12579 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006174676s
	[INFO] 10.244.0.13:53538 - 12345 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000091701s
	[INFO] 10.244.0.13:53538 - 12144 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000126528s
	[INFO] 10.244.0.21:59547 - 34898 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000213243s
	[INFO] 10.244.0.21:42413 - 63992 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000314574s
	[INFO] 10.244.0.21:50534 - 50228 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001818s
	[INFO] 10.244.0.21:44438 - 35236 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000136337s
	[INFO] 10.244.0.21:49334 - 10258 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000138449s
	[INFO] 10.244.0.21:53611 - 11525 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00012321s
	[INFO] 10.244.0.21:33638 - 34118 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007323199s
	[INFO] 10.244.0.21:43427 - 30051 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007940861s
	[INFO] 10.244.0.21:43377 - 12238 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008381865s
	[INFO] 10.244.0.21:40602 - 12057 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.009350731s
	[INFO] 10.244.0.21:47148 - 45016 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007185414s
	[INFO] 10.244.0.21:42834 - 25970 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007493941s
	[INFO] 10.244.0.21:44226 - 13563 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001030468s
	[INFO] 10.244.0.21:36544 - 7675 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001087253s
	[INFO] 10.244.0.25:33322 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000238152s
	[INFO] 10.244.0.25:43627 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00014501s
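
Annotation: the NXDOMAIN lines here are expected, not errors. Each client resolves through the pod's resolv.conf search path, so a name like registry.kube-system.svc.cluster.local is first tried with cluster.local, the GCE-internal suffixes, and google.internal appended, and only the final bare query returns NOERROR with an answer. A minimal way to watch that expansion from inside the cluster, assuming the busybox pod's image ships nslookup (as the busybox images used by these tests do):

    kubectl --context addons-630093 exec busybox -- nslookup registry.kube-system.svc.cluster.local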
	
	
	==> describe nodes <==
	Name:               addons-630093
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-630093
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=addons-630093
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T23_11_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-630093
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:11:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-630093
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 23:22:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 23:19:41 +0000   Wed, 04 Dec 2024 23:11:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 23:19:41 +0000   Wed, 04 Dec 2024 23:11:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 23:19:41 +0000   Wed, 04 Dec 2024 23:11:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 23:19:41 +0000   Wed, 04 Dec 2024 23:11:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-630093
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	System Info:
	  Machine ID:                 8258e1e2133c40cebfa95f57ba32eee3
	  System UUID:                bf67fca3-467d-49b0-b09d-7f56669f6671
	  Boot ID:                    ac1c7763-4d61-4ba9-8c16-bcbc5ed122b3
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m36s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m4s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-bjrmz    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-nvslc                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-addons-630093                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-sklhp                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-addons-630093                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-addons-630093        200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-k9l4p                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-addons-630093                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node addons-630093 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node addons-630093 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)  kubelet          Node addons-630093 status is now: NodeHasSufficientPID
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  10m                kubelet          Node addons-630093 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m                kubelet          Node addons-630093 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m                kubelet          Node addons-630093 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node addons-630093 event: Registered Node addons-630093 in Controller
	  Normal   NodeReady                10m                kubelet          Node addons-630093 status is now: NodeReady
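
Annotation: the node itself is healthy: Ready since 23:11:57, no memory/disk/PID pressure, and the allocated requests (950m CPU, 310Mi memory) are far below the 8-CPU/32Gi capacity, so the stuck default/nginx and default/task-pv-pod pods are not a scheduling problem. A generic check to list only the pods that never reached Running:

    kubectl --context addons-630093 get pods -A --field-selector=status.phase!=Running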
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 46 91 d1 19 2f 08 06
	[Dec 4 22:54] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 d8 34 c4 9e fd 08 06
	[  +0.000456] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 46 91 d1 19 2f 08 06
	[ +35.699001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 90 40 5e 28 e1 08 06
	[Dec 4 22:55] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 3d b0 9a 20 99 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff de 90 40 5e 28 e1 08 06
	[  +1.225322] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000021] ll header: 00000000: ff ff ff ff ff ff b2 70 f6 e4 04 7e 08 06
	[  +0.023795] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a e9 42 d7 ae 99 08 06
	[  +8.010933] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 92 a5 ca 19 c6 08 06
	[ +18.260065] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9e b7 56 b9 28 5b 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ae 92 a5 ca 19 c6 08 06
	[ +24.579912] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa ca b1 23 b4 91 08 06
	[  +0.000531] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3a e9 42 d7 ae 99 08 06
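
Annotation: the repeated "martian source" entries are kernel noise typical of bridge/veth setups like this one: packets with pod-CIDR (10.244.0.x) sources briefly arrive on eth0 before the CNI routes settle, and the kernel logs them because martian logging is enabled. They carry no signal for the Ingress failure. To check the toggle on the node (a generic sysctl, not minikube-specific):

    out/minikube-linux-amd64 -p addons-630093 ssh -- sysctl net.ipv4.conf.all.log_martians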
	
	
	==> etcd [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1] <==
	{"level":"warn","ts":"2024-12-04T23:11:40.708502Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.336878ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128033691115604618 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/kube-system/amd-gpu-device-plugin\" mod_revision:0 > success:<request_put:<key:\"/registry/daemonsets/kube-system/amd-gpu-device-plugin\" value_size:3622 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-12-04T23:11:40.895257Z","caller":"traceutil/trace.go:171","msg":"trace[1109807764] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"279.117548ms","start":"2024-12-04T23:11:40.616120Z","end":"2024-12-04T23:11:40.895238Z","steps":["trace[1109807764] 'process raft request'  (duration: 279.078288ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T23:11:40.895484Z","caller":"traceutil/trace.go:171","msg":"trace[215470366] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"387.51899ms","start":"2024-12-04T23:11:40.507954Z","end":"2024-12-04T23:11:40.895473Z","steps":["trace[215470366] 'process raft request'  (duration: 96.858883ms)","trace[215470366] 'compare'  (duration: 103.229726ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-04T23:11:40.895555Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-04T23:11:40.507931Z","time spent":"387.575868ms","remote":"127.0.0.1:59108","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3684,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/amd-gpu-device-plugin\" mod_revision:0 > success:<request_put:<key:\"/registry/daemonsets/kube-system/amd-gpu-device-plugin\" value_size:3622 >> failure:<>"}
	{"level":"info","ts":"2024-12-04T23:11:40.895855Z","caller":"traceutil/trace.go:171","msg":"trace[2076159084] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"288.040682ms","start":"2024-12-04T23:11:40.607803Z","end":"2024-12-04T23:11:40.895844Z","steps":["trace[2076159084] 'process raft request'  (duration: 287.297204ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T23:11:40.895959Z","caller":"traceutil/trace.go:171","msg":"trace[705242873] linearizableReadLoop","detail":"{readStateIndex:410; appliedIndex:408; }","duration":"280.349916ms","start":"2024-12-04T23:11:40.615601Z","end":"2024-12-04T23:11:40.895951Z","steps":["trace[705242873] 'read index received'  (duration: 83.684619ms)","trace[705242873] 'applied index is now lower than readState.Index'  (duration: 196.664648ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-04T23:11:40.896113Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"280.608929ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-addons-630093\" ","response":"range_response_count:1 size:7253"}
	{"level":"info","ts":"2024-12-04T23:11:40.896138Z","caller":"traceutil/trace.go:171","msg":"trace[1318972100] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-addons-630093; range_end:; response_count:1; response_revision:401; }","duration":"280.640123ms","start":"2024-12-04T23:11:40.615490Z","end":"2024-12-04T23:11:40.896130Z","steps":["trace[1318972100] 'agreement among raft nodes before linearized reading'  (duration: 280.572794ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:11:40.896264Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"280.36641ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T23:11:40.896282Z","caller":"traceutil/trace.go:171","msg":"trace[697950005] range","detail":"{range_begin:/registry/storageclasses; range_end:; response_count:0; response_revision:401; }","duration":"280.385448ms","start":"2024-12-04T23:11:40.615891Z","end":"2024-12-04T23:11:40.896276Z","steps":["trace[697950005] 'agreement among raft nodes before linearized reading'  (duration: 280.354047ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:11:41.603321Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.477454ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T23:11:41.603924Z","caller":"traceutil/trace.go:171","msg":"trace[1769666947] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:419; }","duration":"106.090798ms","start":"2024-12-04T23:11:41.497809Z","end":"2024-12-04T23:11:41.603899Z","steps":["trace[1769666947] 'agreement among raft nodes before linearized reading'  (duration: 105.439451ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:11:41.603524Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.607937ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-addons-630093\" ","response":"range_response_count:1 size:7253"}
	{"level":"info","ts":"2024-12-04T23:11:41.604378Z","caller":"traceutil/trace.go:171","msg":"trace[1429916583] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-addons-630093; range_end:; response_count:1; response_revision:419; }","duration":"101.463597ms","start":"2024-12-04T23:11:41.502900Z","end":"2024-12-04T23:11:41.604364Z","steps":["trace[1429916583] 'agreement among raft nodes before linearized reading'  (duration: 100.553991ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T23:11:42.012812Z","caller":"traceutil/trace.go:171","msg":"trace[1073586070] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"101.602813ms","start":"2024-12-04T23:11:41.911189Z","end":"2024-12-04T23:11:42.012792Z","steps":["trace[1073586070] 'process raft request'  (duration: 87.210063ms)","trace[1073586070] 'compare'  (duration: 13.942562ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-04T23:11:42.012996Z","caller":"traceutil/trace.go:171","msg":"trace[73910532] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"101.658352ms","start":"2024-12-04T23:11:41.911329Z","end":"2024-12-04T23:11:42.012987Z","steps":["trace[73910532] 'process raft request'  (duration: 101.143669ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T23:11:42.013256Z","caller":"traceutil/trace.go:171","msg":"trace[1994636355] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"101.69878ms","start":"2024-12-04T23:11:41.911547Z","end":"2024-12-04T23:11:42.013245Z","steps":["trace[1994636355] 'process raft request'  (duration: 100.967611ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:11:42.096651Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.399561ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T23:11:42.096715Z","caller":"traceutil/trace.go:171","msg":"trace[1209668564] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:440; }","duration":"178.473778ms","start":"2024-12-04T23:11:41.918228Z","end":"2024-12-04T23:11:42.096702Z","steps":["trace[1209668564] 'agreement among raft nodes before linearized reading'  (duration: 178.384048ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:11:42.097064Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.915985ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-ingress-dns-minikube\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T23:11:42.099886Z","caller":"traceutil/trace.go:171","msg":"trace[231438469] range","detail":"{range_begin:/registry/pods/kube-system/kube-ingress-dns-minikube; range_end:; response_count:0; response_revision:440; }","duration":"181.736324ms","start":"2024-12-04T23:11:41.918132Z","end":"2024-12-04T23:11:42.099868Z","steps":["trace[231438469] 'agreement among raft nodes before linearized reading'  (duration: 178.596552ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T23:11:44.318424Z","caller":"traceutil/trace.go:171","msg":"trace[299548537] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"105.793664ms","start":"2024-12-04T23:11:44.212613Z","end":"2024-12-04T23:11:44.318407Z","steps":["trace[299548537] 'process raft request'  (duration: 103.084576ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T23:21:29.348231Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1851}
	{"level":"info","ts":"2024-12-04T23:21:29.373005Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1851,"took":"24.102146ms","hash":28937920,"current-db-size-bytes":8540160,"current-db-size":"8.5 MB","current-db-size-in-use-bytes":5484544,"current-db-size-in-use":"5.5 MB"}
	{"level":"info","ts":"2024-12-04T23:21:29.373060Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":28937920,"revision":1851,"compact-revision":-1}
	
	
	==> kernel <==
	 23:22:27 up  2:04,  0 users,  load average: 0.19, 0.37, 0.71
	Linux addons-630093 5.15.0-1071-gcp #79~20.04.1-Ubuntu SMP Thu Oct 17 21:59:34 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac] <==
	I1204 23:20:27.396182       1 main.go:301] handling current node
	I1204 23:20:37.395674       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:20:37.395711       1 main.go:301] handling current node
	I1204 23:20:47.395839       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:20:47.395878       1 main.go:301] handling current node
	I1204 23:20:57.395664       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:20:57.395702       1 main.go:301] handling current node
	I1204 23:21:07.397452       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:21:07.397501       1 main.go:301] handling current node
	I1204 23:21:17.403496       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:21:17.403542       1 main.go:301] handling current node
	I1204 23:21:27.396540       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:21:27.396587       1 main.go:301] handling current node
	I1204 23:21:37.395694       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:21:37.395737       1 main.go:301] handling current node
	I1204 23:21:47.396055       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:21:47.396093       1 main.go:301] handling current node
	I1204 23:21:57.402038       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:21:57.402077       1 main.go:301] handling current node
	I1204 23:22:07.397678       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:22:07.397730       1 main.go:301] handling current node
	I1204 23:22:17.403599       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:22:17.403646       1 main.go:301] handling current node
	I1204 23:22:27.398718       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:22:27.398802       1 main.go:301] handling current node
	
	
	==> kube-apiserver [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f] <==
	E1204 23:13:18.021072       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1204 23:13:18.022591       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.81.204:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.81.204:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.81.204:443: connect: connection refused" logger="UnhandledError"
	I1204 23:13:18.053200       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1204 23:13:59.747428       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54842: use of closed network connection
	E1204 23:13:59.921107       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54876: use of closed network connection
	I1204 23:14:08.946781       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.65.33"}
	I1204 23:14:25.954565       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1204 23:14:26.167940       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.235.196"}
	I1204 23:14:28.188596       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1204 23:14:29.205715       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1204 23:20:19.050910       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I1204 23:20:26.058381       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1204 23:20:26.058427       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1204 23:20:26.073184       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1204 23:20:26.073244       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1204 23:20:26.095228       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1204 23:20:26.095382       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1204 23:20:26.107180       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1204 23:20:26.107325       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1204 23:20:27.097473       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1204 23:20:27.108991       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1204 23:20:27.118444       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
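
The 503s and "connection refused" errors above are API-aggregation failures: the apiserver proxies metrics.k8s.io to the metrics-server Service and records the outcome on the APIService object. A minimal check, assuming the cluster from this run is still reachable:

	# Inspect why the aggregated metrics API is (un)available; the message
	# mirrors the proxy errors the apiserver logged above.
	kubectl --context addons-630093 get apiservice v1beta1.metrics.k8s.io \
	  -o jsonpath='{.status.conditions[?(@.type=="Available")].message}'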
	
	
	==> kube-controller-manager [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec] <==
	E1204 23:20:58.913961       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 23:20:59.913001       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:20:59.913043       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E1204 23:21:07.165685       1 pv_controller.go:1586] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	W1204 23:21:12.842910       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:21:12.842958       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E1204 23:21:22.166899       1 pv_controller.go:1586] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	W1204 23:21:22.883686       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:21:22.883731       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 23:21:26.679959       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:21:26.680029       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 23:21:29.758536       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:21:29.758577       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E1204 23:21:37.166974       1 pv_controller.go:1586] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	E1204 23:21:52.167947       1 pv_controller.go:1586] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	W1204 23:21:53.518195       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:21:53.518240       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 23:22:01.683467       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:22:01.683517       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E1204 23:22:07.168652       1 pv_controller.go:1586] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	W1204 23:22:10.847614       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:22:10.847660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 23:22:13.984934       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:22:13.984975       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E1204 23:22:22.169455       1 pv_controller.go:1586] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
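
The repeating persistentvolume-binder errors above mean PVC default/test-pvc names storageClassName "local-path" while no StorageClass by that name exists yet. A minimal sketch for confirming the mismatch, assuming the cluster is still reachable:

	# Compare the class the PVC asks for with the classes that exist.
	kubectl --context addons-630093 -n default get pvc test-pvc \
	  -o jsonpath='{.spec.storageClassName}'
	kubectl --context addons-630093 get storageclass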
	
	
	==> kube-proxy [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d] <==
	I1204 23:11:41.999798       1 server_linux.go:66] "Using iptables proxy"
	I1204 23:11:42.522412       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1204 23:11:42.522510       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 23:11:42.915799       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1204 23:11:42.916905       1 server_linux.go:169] "Using iptables Proxier"
	I1204 23:11:42.999168       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 23:11:42.999868       1 server.go:483] "Version info" version="v1.31.2"
	I1204 23:11:42.999987       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 23:11:43.001630       1 config.go:199] "Starting service config controller"
	I1204 23:11:43.002952       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 23:11:43.002663       1 config.go:328] "Starting node config controller"
	I1204 23:11:43.003244       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 23:11:43.002141       1 config.go:105] "Starting endpoint slice config controller"
	I1204 23:11:43.003442       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 23:11:43.105483       1 shared_informer.go:320] Caches are synced for node config
	I1204 23:11:43.105660       1 shared_informer.go:320] Caches are synced for service config
	I1204 23:11:43.105772       1 shared_informer.go:320] Caches are synced for endpoint slice config
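
The startup warning above suggests `--nodeport-addresses primary`. A hedged sketch of applying that, assuming the kubeadm-style kube-proxy ConfigMap minikube ships (key `config.conf` holding a KubeProxyConfiguration):

	# Show the current (unset) value, then set it in the ConfigMap and
	# restart kube-proxy to pick the change up.
	kubectl --context addons-630093 -n kube-system get configmap kube-proxy \
	  -o jsonpath='{.data.config\.conf}' | grep -A2 nodePortAddresses
	# In config.conf set:
	#   nodePortAddresses:
	#   - primary
	kubectl --context addons-630093 -n kube-system rollout restart daemonset kube-proxy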
	
	
	==> kube-scheduler [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc] <==
	W1204 23:11:30.518306       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1204 23:11:30.518308       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1204 23:11:30.518319       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E1204 23:11:30.518324       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:30.518387       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1204 23:11:30.518406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.464973       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1204 23:11:31.465022       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1204 23:11:31.504488       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1204 23:11:31.504541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.546483       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1204 23:11:31.546559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.565052       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1204 23:11:31.565112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.572602       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1204 23:11:31.572647       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.606116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1204 23:11:31.606166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.628789       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1204 23:11:31.628843       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.663323       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1204 23:11:31.663367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.685908       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1204 23:11:31.685980       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1204 23:11:33.616392       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
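
The forbidden errors above all land in the first seconds after startup (23:11:30-31) and stop once RBAC propagates, which the final "Caches are synced" line confirms. A sketch for replaying one of those checks after the fact, assuming the kubeconfig user is allowed to impersonate system components:

	# Ask the authorizer the same question the scheduler's informer asked.
	kubectl --context addons-630093 auth can-i list persistentvolumes \
	  --as=system:kube-scheduler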
	
	
	==> kubelet <==
	Dec 04 23:21:24 addons-630093 kubelet[1643]: E1204 23:21:24.432118    1643 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 04 23:21:24 addons-630093 kubelet[1643]: E1204 23:21:24.432202    1643 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 04 23:21:24 addons-630093 kubelet[1643]: E1204 23:21:24.432372    1643 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:task-pv-container,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-server,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:task-pv-storage,ReadOnly:false,MountPath:/usr/share/nginx/html,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bbll2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod task-pv-pod_default(7d7d08b6-0c55-4e1e-af14-bcf120b4fe87): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 04 23:21:24 addons-630093 kubelet[1643]: E1204 23:21:24.433578    1643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="7d7d08b6-0c55-4e1e-af14-bcf120b4fe87"
	Dec 04 23:21:27 addons-630093 kubelet[1643]: E1204 23:21:27.811870    1643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="033304b8-dc25-498d-9212-9e1e40bc9c12"
	Dec 04 23:21:32 addons-630093 kubelet[1643]: E1204 23:21:32.898495    1643 container_manager_linux.go:513] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/172acc3450ade00044526824741e005120317f6d35ec317f851d2b6dc6d2a3b8, memory: /docker/172acc3450ade00044526824741e005120317f6d35ec317f851d2b6dc6d2a3b8/system.slice/kubelet.service"
	Dec 04 23:21:33 addons-630093 kubelet[1643]: E1204 23:21:33.039159    1643 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354493038894107,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:21:33 addons-630093 kubelet[1643]: E1204 23:21:33.039201    1643 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354493038894107,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:21:35 addons-630093 kubelet[1643]: E1204 23:21:35.812158    1643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod" podUID="7d7d08b6-0c55-4e1e-af14-bcf120b4fe87"
	Dec 04 23:21:41 addons-630093 kubelet[1643]: E1204 23:21:41.811996    1643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="033304b8-dc25-498d-9212-9e1e40bc9c12"
	Dec 04 23:21:43 addons-630093 kubelet[1643]: E1204 23:21:43.041545    1643 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354503041230306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:21:43 addons-630093 kubelet[1643]: E1204 23:21:43.041596    1643 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354503041230306,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:21:50 addons-630093 kubelet[1643]: E1204 23:21:50.811580    1643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod" podUID="7d7d08b6-0c55-4e1e-af14-bcf120b4fe87"
	Dec 04 23:21:53 addons-630093 kubelet[1643]: E1204 23:21:53.045072    1643 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354513044817879,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:21:53 addons-630093 kubelet[1643]: E1204 23:21:53.045107    1643 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354513044817879,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:22:01 addons-630093 kubelet[1643]: I1204 23:22:01.810687    1643 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 04 23:22:03 addons-630093 kubelet[1643]: E1204 23:22:03.047386    1643 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354523047127024,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:22:03 addons-630093 kubelet[1643]: E1204 23:22:03.047426    1643 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354523047127024,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:22:03 addons-630093 kubelet[1643]: E1204 23:22:03.811508    1643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod" podUID="7d7d08b6-0c55-4e1e-af14-bcf120b4fe87"
	Dec 04 23:22:13 addons-630093 kubelet[1643]: E1204 23:22:13.049527    1643 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354533049340382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:22:13 addons-630093 kubelet[1643]: E1204 23:22:13.049558    1643 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354533049340382,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:22:14 addons-630093 kubelet[1643]: E1204 23:22:14.812179    1643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod" podUID="7d7d08b6-0c55-4e1e-af14-bcf120b4fe87"
	Dec 04 23:22:23 addons-630093 kubelet[1643]: E1204 23:22:23.051618    1643 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354543051409119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:22:23 addons-630093 kubelet[1643]: E1204 23:22:23.051661    1643 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354543051409119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:22:25 addons-630093 kubelet[1643]: E1204 23:22:25.811364    1643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod" podUID="7d7d08b6-0c55-4e1e-af14-bcf120b4fe87"
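
Every pull failure above is Docker Hub's anonymous rate limit (toomanyrequests), not a cluster fault. Two hedged workarounds, with placeholder credentials ($DOCKER_USER and $DOCKER_PAT are not part of this run):

	# (a) Side-load the image into the node so kubelet never contacts the Hub.
	minikube -p addons-630093 image load docker.io/nginx:alpine
	# (b) Or authenticate pulls via an imagePullSecret on the default SA.
	kubectl --context addons-630093 create secret docker-registry hub-creds \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username="$DOCKER_USER" --docker-password="$DOCKER_PAT"
	kubectl --context addons-630093 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"hub-creds"}]}'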
	
	
	==> storage-provisioner [7579ef87384414e56ddfe0b7d9482bd87f3030a02185f51552230baf2942b017] <==
	I1204 23:11:58.350091       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1204 23:11:58.357669       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1204 23:11:58.357713       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1204 23:11:58.365574       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1204 23:11:58.365696       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7e65eeda-0a1f-4ed0-93d5-7510680ef7a9", APIVersion:"v1", ResourceVersion:"914", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-630093_4fbeb0c1-dfd3-440b-90ad-a51f627c5476 became leader
	I1204 23:11:58.365747       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-630093_4fbeb0c1-dfd3-440b-90ad-a51f627c5476!
	I1204 23:11:58.466731       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-630093_4fbeb0c1-dfd3-440b-90ad-a51f627c5476!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-630093 -n addons-630093
helpers_test.go:261: (dbg) Run:  kubectl --context addons-630093 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-g9mgr ingress-nginx-admission-patch-6klmq
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-630093 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-g9mgr ingress-nginx-admission-patch-6klmq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-630093 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-g9mgr ingress-nginx-admission-patch-6klmq: exit status 1 (84.126166ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-630093/192.168.49.2
	Start Time:       Wed, 04 Dec 2024 23:14:26 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-49bg2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-49bg2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  8m2s                 default-scheduler  Successfully assigned default/nginx to addons-630093
	  Normal   Pulling    3m9s (x4 over 8m2s)  kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     117s (x4 over 7m3s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     117s (x4 over 7m3s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    88s (x7 over 7m3s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     88s (x7 over 7m3s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-630093/192.168.49.2
	Start Time:       Wed, 04 Dec 2024 23:14:23 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bbll2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-bbll2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  8m5s                   default-scheduler  Successfully assigned default/task-pv-pod to addons-630093
	  Warning  Failed     7m34s                  kubelet            Failed to pull image "docker.io/nginx": initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m48s (x4 over 8m5s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m58s (x4 over 7m34s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m58s (x3 over 6m2s)   kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    2m17s (x7 over 7m33s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     2m17s (x7 over 7m33s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jd9np (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-jd9np:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-g9mgr" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-6klmq" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-630093 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-g9mgr ingress-nginx-admission-patch-6klmq: exit status 1
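
The batch describe exits non-zero as soon as one pod in the list is gone (the NotFound stderr above), even though the remaining pods were described fine. A minimal sketch that tolerates missing pods:

	# Describe each pod independently; a vanished pod no longer fails the rest.
	for p in nginx task-pv-pod test-local-path; do
	  kubectl --context addons-630093 -n default describe pod "$p" || true
	done
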
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-630093 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-630093 addons disable ingress-dns --alsologtostderr -v=1: (1.520012036s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-630093 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-630093 addons disable ingress --alsologtostderr -v=1: (7.642632832s)
--- FAIL: TestAddons/parallel/Ingress (492.17s)

                                                
                                    
TestAddons/parallel/MetricsServer (355.56s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
I1204 23:14:13.733062  387894 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:394: metrics-server stabilized in 3.105711ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-vtkhx" [cec44a14-191c-4123-b802-68a2c04f883d] Running
I1204 23:14:13.743255  387894 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1204 23:14:13.743292  387894 kapi.go:107] duration metric: took 10.240987ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004284602s
addons_test.go:402: (dbg) Run:  kubectl --context addons-630093 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-630093 top pods -n kube-system: exit status 1 (81.992147ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-nvslc, age: 2m41.820888693s

                                                
                                                
** /stderr **
I1204 23:14:19.823701  387894 retry.go:31] will retry after 1.823358331s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-630093 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-630093 top pods -n kube-system: exit status 1 (65.403186ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-nvslc, age: 2m43.710651366s

                                                
                                                
** /stderr **
I1204 23:14:21.713155  387894 retry.go:31] will retry after 2.308192652s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-630093 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-630093 top pods -n kube-system: exit status 1 (67.379682ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-nvslc, age: 2m46.086598342s

                                                
                                                
** /stderr **
I1204 23:14:24.089174  387894 retry.go:31] will retry after 4.123445785s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-630093 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-630093 top pods -n kube-system: exit status 1 (77.994988ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-nvslc, age: 2m50.2888254s

                                                
                                                
** /stderr **
I1204 23:14:28.291441  387894 retry.go:31] will retry after 5.884864849s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-630093 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-630093 top pods -n kube-system: exit status 1 (66.843254ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-nvslc, age: 2m56.240534684s

                                                
                                                
** /stderr **
I1204 23:14:34.243451  387894 retry.go:31] will retry after 15.663987031s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-630093 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-630093 top pods -n kube-system: exit status 1 (67.072214ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-nvslc, age: 3m11.972758479s

                                                
                                                
** /stderr **
I1204 23:14:49.975776  387894 retry.go:31] will retry after 31.173950886s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-630093 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-630093 top pods -n kube-system: exit status 1 (70.981177ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-nvslc, age: 3m43.218377027s

                                                
                                                
** /stderr **
I1204 23:15:21.221037  387894 retry.go:31] will retry after 29.740812184s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-630093 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-630093 top pods -n kube-system: exit status 1 (67.050034ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-nvslc, age: 4m13.026214172s

                                                
                                                
** /stderr **
I1204 23:15:51.029326  387894 retry.go:31] will retry after 33.074809668s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-630093 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-630093 top pods -n kube-system: exit status 1 (68.138445ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-nvslc, age: 4m46.170249373s

                                                
                                                
** /stderr **
I1204 23:16:24.172884  387894 retry.go:31] will retry after 1m0.818156577s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-630093 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-630093 top pods -n kube-system: exit status 1 (65.560454ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-nvslc, age: 5m47.054847714s

                                                
                                                
** /stderr **
I1204 23:17:25.057422  387894 retry.go:31] will retry after 53.45259284s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-630093 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-630093 top pods -n kube-system: exit status 1 (68.540022ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-nvslc, age: 6m40.576531406s

                                                
                                                
** /stderr **
I1204 23:18:18.579297  387894 retry.go:31] will retry after 1m3.424609918s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-630093 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-630093 top pods -n kube-system: exit status 1 (68.280225ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-nvslc, age: 7m44.069956511s

                                                
                                                
** /stderr **
I1204 23:19:22.072477  387894 retry.go:31] will retry after 44.403027581s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-630093 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-630093 top pods -n kube-system: exit status 1 (67.941513ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-nvslc, age: 8m28.541263961s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
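
`kubectl top` reads the aggregated metrics.k8s.io API, and its "Metrics not available for pod ..." error means metrics-server has not yet reported a sample for that pod. Querying the API directly, as a sketch, separates a down API from a not-yet-scraped pod:

	# Raw metrics for kube-system pods; an empty "items" list (or an error)
	# points at metrics-server itself rather than scrape timing.
	kubectl --context addons-630093 get --raw \
	  /apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods | head -c 400
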
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-630093
helpers_test.go:235: (dbg) docker inspect addons-630093:
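
The full `docker inspect` JSON follows; when only one field matters, a Go template keeps the output to a single value. A sketch pulling the published host port for the apiserver (8443/tcp), which the Ports map below shows as 33143:

	# Index into NetworkSettings.Ports: a map of "port/proto" -> []PortBinding.
	docker inspect -f \
	  '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' \
	  addons-630093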

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "172acc3450ade00044526824741e005120317f6d35ec317f851d2b6dc6d2a3b8",
	        "Created": "2024-12-04T23:11:16.797897353Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 389943,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-04T23:11:16.916347418Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a0bf2062289d31d12b734a031220306d830691a529a6eae8b4c8f4049e20571",
	        "ResolvConfPath": "/var/lib/docker/containers/172acc3450ade00044526824741e005120317f6d35ec317f851d2b6dc6d2a3b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/172acc3450ade00044526824741e005120317f6d35ec317f851d2b6dc6d2a3b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/172acc3450ade00044526824741e005120317f6d35ec317f851d2b6dc6d2a3b8/hosts",
	        "LogPath": "/var/lib/docker/containers/172acc3450ade00044526824741e005120317f6d35ec317f851d2b6dc6d2a3b8/172acc3450ade00044526824741e005120317f6d35ec317f851d2b6dc6d2a3b8-json.log",
	        "Name": "/addons-630093",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-630093:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-630093",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/469ba36a797e51b3c3ffcf32044a5cc7b1eaaf002213862a02e3a76a9b1fcfe2-init/diff:/var/lib/docker/overlay2/e1057f3484b1ab78c06169089ecae0d5a5ffb4d6954d3cd93f0938b7adf18020/diff",
	                "MergedDir": "/var/lib/docker/overlay2/469ba36a797e51b3c3ffcf32044a5cc7b1eaaf002213862a02e3a76a9b1fcfe2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/469ba36a797e51b3c3ffcf32044a5cc7b1eaaf002213862a02e3a76a9b1fcfe2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/469ba36a797e51b3c3ffcf32044a5cc7b1eaaf002213862a02e3a76a9b1fcfe2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-630093",
	                "Source": "/var/lib/docker/volumes/addons-630093/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-630093",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-630093",
	                "name.minikube.sigs.k8s.io": "addons-630093",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "38d3a3f6bb8d75ec22d0acfa9ec923dac8873b55e0bf68a977ec8a7eab9fc43d",
	            "SandboxKey": "/var/run/docker/netns/38d3a3f6bb8d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-630093": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a921fd89d48682e01ff03a455275f7258f4c5b5f271375ec1d96882eeae0da5a",
	                    "EndpointID": "1045d162f6b6ab28f4f633530bdbe7b45cc7c49fe1d735b103b4e8f31f8aba3e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-630093",
	                        "172acc3450ad"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
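The `docker container inspect` dump above records the host-port mappings under NetworkSettings.Ports. A single mapping can be read back with a Go template instead of the full JSON; a minimal sketch against the same container (this mirrors the template the test harness itself runs later in these logs):

	docker container inspect addons-630093 \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	# prints 33140, matching the 22/tcp entry in the dump above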
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-630093 -n addons-630093
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-630093 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-630093 logs -n 25: (1.214919546s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| delete  | -p download-only-287298              | download-only-287298   | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| start   | -o=json --download-only              | download-only-701357   | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC |                     |
	|         | -p download-only-701357              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| delete  | -p download-only-701357              | download-only-701357   | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| delete  | -p download-only-287298              | download-only-287298   | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| delete  | -p download-only-701357              | download-only-701357   | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| start   | --download-only -p                   | download-docker-758817 | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC |                     |
	|         | download-docker-758817               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-758817            | download-docker-758817 | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| start   | --download-only -p                   | binary-mirror-223027   | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC |                     |
	|         | binary-mirror-223027                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45271               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-223027              | binary-mirror-223027   | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| addons  | disable dashboard -p                 | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC |                     |
	|         | addons-630093                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC |                     |
	|         | addons-630093                        |                        |         |         |                     |                     |
	| start   | -p addons-630093 --wait=true         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:13 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	| addons  | addons-630093 addons disable         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:13 UTC | 04 Dec 24 23:13 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-630093 addons disable         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:13 UTC | 04 Dec 24 23:14 UTC |
	|         | gcp-auth --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | -p addons-630093                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-630093 addons                 | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | disable nvidia-device-plugin         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-630093 addons disable         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | amd-gpu-device-plugin                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-630093 addons disable         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| ip      | addons-630093 ip                     | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	| addons  | addons-630093 addons disable         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-630093 addons                 | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | disable inspektor-gadget             |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-630093 addons disable         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | addons-630093 addons disable         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:19 UTC |                     |
	|         | storage-provisioner-rancher          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
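Each multi-row entry in the Audit table above is one CLI invocation with its Args column wrapped. Re-joined for readability, the long start row corresponds to a single command along these lines (flags exactly as listed in the table, binary per MINIKUBE_BIN below):

	out/minikube-linux-amd64 start -p addons-630093 --wait=true \
	  --memory=4000 --alsologtostderr \
	  --addons=registry --addons=metrics-server --addons=volumesnapshots \
	  --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
	  --addons=inspektor-gadget --addons=nvidia-device-plugin \
	  --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin \
	  --driver=docker --container-runtime=crio --addons=ingress \
	  --addons=ingress-dns --addons=storage-provisioner-rancher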
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 23:10:54
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 23:10:54.556147  389201 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:10:54.556275  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:10:54.556285  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:10:54.556289  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:10:54.556510  389201 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-381016/.minikube/bin
	I1204 23:10:54.557204  389201 out.go:352] Setting JSON to false
	I1204 23:10:54.558202  389201 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6804,"bootTime":1733347051,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 23:10:54.558281  389201 start.go:139] virtualization: kvm guest
	I1204 23:10:54.560449  389201 out.go:177] * [addons-630093] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 23:10:54.561800  389201 notify.go:220] Checking for updates...
	I1204 23:10:54.561821  389201 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 23:10:54.563229  389201 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 23:10:54.564678  389201 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-381016/kubeconfig
	I1204 23:10:54.566233  389201 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-381016/.minikube
	I1204 23:10:54.567553  389201 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 23:10:54.568781  389201 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 23:10:54.570554  389201 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 23:10:54.592245  389201 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1204 23:10:54.592340  389201 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:10:54.635748  389201 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-04 23:10:54.62674737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
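The one-line struct dump above is the parsed result of the `docker system info --format "{{json .}}"` call on the preceding Run line. Individual fields can be queried with a narrower template rather than parsing the whole blob; a small sketch (field names match the dump above):

	docker system info --format '{{.CgroupDriver}} {{.ServerVersion}}'
	# expected here: cgroupfs 27.3.1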
	I1204 23:10:54.635854  389201 docker.go:318] overlay module found
	I1204 23:10:54.637780  389201 out.go:177] * Using the docker driver based on user configuration
	I1204 23:10:54.639298  389201 start.go:297] selected driver: docker
	I1204 23:10:54.639319  389201 start.go:901] validating driver "docker" against <nil>
	I1204 23:10:54.639333  389201 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 23:10:54.640090  389201 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:10:54.684497  389201 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-04 23:10:54.676209306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1204 23:10:54.684673  389201 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 23:10:54.684915  389201 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 23:10:54.686872  389201 out.go:177] * Using Docker driver with root privileges
	I1204 23:10:54.688173  389201 cni.go:84] Creating CNI manager for ""
	I1204 23:10:54.688255  389201 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1204 23:10:54.688267  389201 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1204 23:10:54.688343  389201 start.go:340] cluster config:
	{Name:addons-630093 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-630093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
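The cluster config printed above is persisted as the profile's config.json (the save is logged at 23:10:54.695076 below). Assuming the JSON keys mirror the struct field names in the dump and jq is available, spot fields can be pulled with, e.g.:

	jq '.KubernetesConfig.KubernetesVersion, .Nodes' \
	  /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/config.json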
	I1204 23:10:54.689848  389201 out.go:177] * Starting "addons-630093" primary control-plane node in "addons-630093" cluster
	I1204 23:10:54.691334  389201 cache.go:121] Beginning downloading kic base image for docker with crio
	I1204 23:10:54.692886  389201 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1204 23:10:54.694391  389201 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:10:54.694445  389201 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20045-381016/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 23:10:54.694446  389201 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1204 23:10:54.694486  389201 cache.go:56] Caching tarball of preloaded images
	I1204 23:10:54.694592  389201 preload.go:172] Found /home/jenkins/minikube-integration/20045-381016/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 23:10:54.694609  389201 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 23:10:54.695076  389201 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/config.json ...
	I1204 23:10:54.695108  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/config.json: {Name:mk972e12a39ea9a33ae63a1f9239f64d658df51e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:10:54.710108  389201 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1204 23:10:54.710258  389201 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1204 23:10:54.710280  389201 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory, skipping pull
	I1204 23:10:54.710287  389201 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in cache, skipping pull
	I1204 23:10:54.710299  389201 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	I1204 23:10:54.710311  389201 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from local cache
	I1204 23:11:08.081763  389201 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from cached tarball
	I1204 23:11:08.081807  389201 cache.go:194] Successfully downloaded all kic artifacts
	I1204 23:11:08.081860  389201 start.go:360] acquireMachinesLock for addons-630093: {Name:mk65aca0e5e36a044494f94ee0e0497ac2b0ebab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 23:11:08.081970  389201 start.go:364] duration metric: took 86.786µs to acquireMachinesLock for "addons-630093"
	I1204 23:11:08.081996  389201 start.go:93] Provisioning new machine with config: &{Name:addons-630093 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-630093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:11:08.082085  389201 start.go:125] createHost starting for "" (driver="docker")
	I1204 23:11:08.248667  389201 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1204 23:11:08.249041  389201 start.go:159] libmachine.API.Create for "addons-630093" (driver="docker")
	I1204 23:11:08.249091  389201 client.go:168] LocalClient.Create starting
	I1204 23:11:08.249258  389201 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca.pem
	I1204 23:11:08.313688  389201 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/cert.pem
	I1204 23:11:08.644970  389201 cli_runner.go:164] Run: docker network inspect addons-630093 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1204 23:11:08.660700  389201 cli_runner.go:211] docker network inspect addons-630093 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1204 23:11:08.660788  389201 network_create.go:284] running [docker network inspect addons-630093] to gather additional debugging logs...
	I1204 23:11:08.660826  389201 cli_runner.go:164] Run: docker network inspect addons-630093
	W1204 23:11:08.677347  389201 cli_runner.go:211] docker network inspect addons-630093 returned with exit code 1
	I1204 23:11:08.677402  389201 network_create.go:287] error running [docker network inspect addons-630093]: docker network inspect addons-630093: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-630093 not found
	I1204 23:11:08.677421  389201 network_create.go:289] output of [docker network inspect addons-630093]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-630093 not found
	
	** /stderr **
	I1204 23:11:08.677519  389201 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1204 23:11:08.695034  389201 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016ec7e0}
	I1204 23:11:08.695093  389201 network_create.go:124] attempt to create docker network addons-630093 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1204 23:11:08.695152  389201 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-630093 addons-630093
	I1204 23:11:08.969618  389201 network_create.go:108] docker network addons-630093 192.168.49.0/24 created
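The bridge network just created can be checked with the same inspect template the harness uses elsewhere in this log; a sketch:

	docker network inspect addons-630093 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	# expected: 192.168.49.0/24 192.168.49.1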
	I1204 23:11:08.969673  389201 kic.go:121] calculated static IP "192.168.49.2" for the "addons-630093" container
	I1204 23:11:08.969756  389201 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1204 23:11:08.986135  389201 cli_runner.go:164] Run: docker volume create addons-630093 --label name.minikube.sigs.k8s.io=addons-630093 --label created_by.minikube.sigs.k8s.io=true
	I1204 23:11:09.028135  389201 oci.go:103] Successfully created a docker volume addons-630093
	I1204 23:11:09.028233  389201 cli_runner.go:164] Run: docker run --rm --name addons-630093-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-630093 --entrypoint /usr/bin/test -v addons-630093:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib
	I1204 23:11:12.239841  389201 cli_runner.go:217] Completed: docker run --rm --name addons-630093-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-630093 --entrypoint /usr/bin/test -v addons-630093:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib: (3.211561235s)
	I1204 23:11:12.239873  389201 oci.go:107] Successfully prepared a docker volume addons-630093
	I1204 23:11:12.239893  389201 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:11:12.239931  389201 kic.go:194] Starting extracting preloaded images to volume ...
	I1204 23:11:12.240003  389201 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20045-381016/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-630093:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir
	I1204 23:11:16.734062  389201 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20045-381016/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-630093:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir: (4.493971774s)
	I1204 23:11:16.734103  389201 kic.go:203] duration metric: took 4.49416848s to extract preloaded images to volume ...
	W1204 23:11:16.734242  389201 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1204 23:11:16.734340  389201 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1204 23:11:16.781802  389201 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-630093 --name addons-630093 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-630093 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-630093 --network addons-630093 --ip 192.168.49.2 --volume addons-630093:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615
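The container-create command above is logged as a single line; re-wrapped verbatim for readability (no flags added or changed):

	docker run -d -t --privileged \
	  --security-opt seccomp=unconfined \
	  --tmpfs /tmp --tmpfs /run \
	  -v /lib/modules:/lib/modules:ro \
	  --hostname addons-630093 --name addons-630093 \
	  --label created_by.minikube.sigs.k8s.io=true \
	  --label name.minikube.sigs.k8s.io=addons-630093 \
	  --label role.minikube.sigs.k8s.io= \
	  --label mode.minikube.sigs.k8s.io=addons-630093 \
	  --network addons-630093 --ip 192.168.49.2 \
	  --volume addons-630093:/var \
	  --security-opt apparmor=unconfined \
	  --memory=4000mb --cpus=2 -e container=docker \
	  --expose 8443 \
	  --publish=127.0.0.1::8443 --publish=127.0.0.1::22 \
	  --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 \
	  --publish=127.0.0.1::32443 \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615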
	I1204 23:11:17.088338  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Running}}
	I1204 23:11:17.106885  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:17.125610  389201 cli_runner.go:164] Run: docker exec addons-630093 stat /var/lib/dpkg/alternatives/iptables
	I1204 23:11:17.168914  389201 oci.go:144] the created container "addons-630093" has a running status.
	I1204 23:11:17.168961  389201 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa...
	I1204 23:11:17.214837  389201 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1204 23:11:17.235866  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:17.253714  389201 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1204 23:11:17.253744  389201 kic_runner.go:114] Args: [docker exec --privileged addons-630093 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1204 23:11:17.295280  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:17.314090  389201 machine.go:93] provisionDockerMachine start ...
	I1204 23:11:17.314213  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:17.333326  389201 main.go:141] libmachine: Using SSH client type: native
	I1204 23:11:17.333585  389201 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1204 23:11:17.333604  389201 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 23:11:17.334344  389201 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53382->127.0.0.1:33140: read: connection reset by peer
	I1204 23:11:20.462359  389201 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-630093
	
	I1204 23:11:20.462394  389201 ubuntu.go:169] provisioning hostname "addons-630093"
	I1204 23:11:20.462459  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:20.480144  389201 main.go:141] libmachine: Using SSH client type: native
	I1204 23:11:20.480382  389201 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1204 23:11:20.480401  389201 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-630093 && echo "addons-630093" | sudo tee /etc/hostname
	I1204 23:11:20.617685  389201 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-630093
	
	I1204 23:11:20.617755  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:20.634927  389201 main.go:141] libmachine: Using SSH client type: native
	I1204 23:11:20.635110  389201 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1204 23:11:20.635127  389201 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-630093' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-630093/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-630093' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 23:11:20.762943  389201 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:11:20.762974  389201 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20045-381016/.minikube CaCertPath:/home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20045-381016/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20045-381016/.minikube}
	I1204 23:11:20.763024  389201 ubuntu.go:177] setting up certificates
	I1204 23:11:20.763037  389201 provision.go:84] configureAuth start
	I1204 23:11:20.763097  389201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-630093
	I1204 23:11:20.780798  389201 provision.go:143] copyHostCerts
	I1204 23:11:20.780875  389201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20045-381016/.minikube/cert.pem (1123 bytes)
	I1204 23:11:20.780993  389201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20045-381016/.minikube/key.pem (1679 bytes)
	I1204 23:11:20.781063  389201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20045-381016/.minikube/ca.pem (1082 bytes)
	I1204 23:11:20.781117  389201 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20045-381016/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca-key.pem org=jenkins.addons-630093 san=[127.0.0.1 192.168.49.2 addons-630093 localhost minikube]
	I1204 23:11:20.868299  389201 provision.go:177] copyRemoteCerts
	I1204 23:11:20.868362  389201 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 23:11:20.868401  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:20.885888  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:20.979351  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 23:11:21.002115  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 23:11:21.025135  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 23:11:21.048097  389201 provision.go:87] duration metric: took 285.042631ms to configureAuth
	I1204 23:11:21.048133  389201 ubuntu.go:193] setting minikube options for container-runtime
	I1204 23:11:21.048329  389201 config.go:182] Loaded profile config "addons-630093": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:11:21.048491  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:21.065589  389201 main.go:141] libmachine: Using SSH client type: native
	I1204 23:11:21.065803  389201 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1204 23:11:21.065829  389201 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 23:11:21.286767  389201 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
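The sysconfig drop-in written above can be verified from inside the node; a sketch, assuming the profile is still running (minikube ssh forwards the command over the mapped SSH port):

	out/minikube-linux-amd64 -p addons-630093 ssh -- cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '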
	
	I1204 23:11:21.286801  389201 machine.go:96] duration metric: took 3.972682372s to provisionDockerMachine
	I1204 23:11:21.286818  389201 client.go:171] duration metric: took 13.037716692s to LocalClient.Create
	I1204 23:11:21.286846  389201 start.go:167] duration metric: took 13.037808895s to libmachine.API.Create "addons-630093"
	I1204 23:11:21.286858  389201 start.go:293] postStartSetup for "addons-630093" (driver="docker")
	I1204 23:11:21.286873  389201 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 23:11:21.286987  389201 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 23:11:21.287090  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:21.304282  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:21.395931  389201 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 23:11:21.399160  389201 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1204 23:11:21.399199  389201 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1204 23:11:21.399213  389201 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1204 23:11:21.399225  389201 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1204 23:11:21.399238  389201 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-381016/.minikube/addons for local assets ...
	I1204 23:11:21.399311  389201 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-381016/.minikube/files for local assets ...
	I1204 23:11:21.399355  389201 start.go:296] duration metric: took 112.489476ms for postStartSetup
	I1204 23:11:21.399706  389201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-630093
	I1204 23:11:21.416048  389201 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/config.json ...
	I1204 23:11:21.416313  389201 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1204 23:11:21.416373  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:21.433021  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:21.523629  389201 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1204 23:11:21.527955  389201 start.go:128] duration metric: took 13.445851769s to createHost
	I1204 23:11:21.527994  389201 start.go:83] releasing machines lock for "addons-630093", held for 13.446010021s
	I1204 23:11:21.528078  389201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-630093
	I1204 23:11:21.544604  389201 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 23:11:21.544635  389201 ssh_runner.go:195] Run: cat /version.json
	I1204 23:11:21.544698  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:21.544711  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:21.562063  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:21.563107  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:21.726911  389201 ssh_runner.go:195] Run: systemctl --version
	I1204 23:11:21.731218  389201 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 23:11:21.869255  389201 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1204 23:11:21.873644  389201 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 23:11:21.892231  389201 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1204 23:11:21.892324  389201 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 23:11:21.918534  389201 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1204 23:11:21.918567  389201 start.go:495] detecting cgroup driver to use...
	I1204 23:11:21.918609  389201 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1204 23:11:21.918738  389201 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 23:11:21.932783  389201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 23:11:21.942996  389201 docker.go:217] disabling cri-docker service (if available) ...
	I1204 23:11:21.943047  389201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 23:11:21.955543  389201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 23:11:21.968274  389201 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 23:11:22.038339  389201 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 23:11:22.105989  389201 docker.go:233] disabling docker service ...
	I1204 23:11:22.106057  389201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 23:11:22.125303  389201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 23:11:22.136595  389201 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 23:11:22.222266  389201 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 23:11:22.302782  389201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 23:11:22.313850  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 23:11:22.329072  389201 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 23:11:22.329153  389201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:11:22.338774  389201 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 23:11:22.338845  389201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:11:22.348617  389201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:11:22.358293  389201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:11:22.368200  389201 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 23:11:22.377304  389201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:11:22.386913  389201 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:11:22.402803  389201 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:11:22.412320  389201 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 23:11:22.420685  389201 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 23:11:22.428658  389201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:11:22.500255  389201 ssh_runner.go:195] Run: sudo systemctl restart crio
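Note: the sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf before restarting the daemon. A condensed sketch of the equivalent manual steps, assuming the same drop-in layout (paths and key names are taken from the log; the snippet itself is illustrative):

	# point crictl at the CRI-O socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pin the pause image and match the host's cgroupfs driver
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	# allow unprivileged binds to low ports inside pods, then apply
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	sudo systemctl daemon-reload && sudo systemctl restart crio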
	I1204 23:11:22.610956  389201 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 23:11:22.611044  389201 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 23:11:22.614513  389201 start.go:563] Will wait 60s for crictl version
	I1204 23:11:22.614575  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:11:22.617917  389201 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 23:11:22.653283  389201 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1204 23:11:22.653370  389201 ssh_runner.go:195] Run: crio --version
	I1204 23:11:22.690618  389201 ssh_runner.go:195] Run: crio --version
	I1204 23:11:22.727703  389201 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1204 23:11:22.729320  389201 cli_runner.go:164] Run: docker network inspect addons-630093 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1204 23:11:22.746518  389201 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1204 23:11:22.750432  389201 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
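Note: both hosts-file updates in this run (host.minikube.internal here, control-plane.minikube.internal later) use the same grep-then-rewrite pattern, so the entry is replaced rather than duplicated on repeated starts. A generalized sketch (the function name is illustrative, and regex metacharacters in the hostname are left unescaped for brevity):

	update_host() {  # usage: update_host 192.168.49.1 host.minikube.internal
	  { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/hosts.$$"
	  sudo cp "/tmp/hosts.$$" /etc/hosts
	}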
	I1204 23:11:22.761195  389201 kubeadm.go:883] updating cluster {Name:addons-630093 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-630093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 23:11:22.761320  389201 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:11:22.761379  389201 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 23:11:22.829323  389201 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 23:11:22.829348  389201 crio.go:433] Images already preloaded, skipping extraction
	I1204 23:11:22.829393  389201 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 23:11:22.862169  389201 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 23:11:22.862194  389201 cache_images.go:84] Images are preloaded, skipping loading
	I1204 23:11:22.862203  389201 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1204 23:11:22.862323  389201 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-630093 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-630093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
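Note: the blank ExecStart= in the kubelet drop-in rendered above is deliberate: systemd requires clearing an inherited ExecStart before a drop-in may define a new one for a non-oneshot service. The same override pattern as a standalone sketch (the override path and kubelet flags here are illustrative, not the ones minikube writes):

	sudo tee /etc/systemd/system/kubelet.service.d/override.conf >/dev/null <<-'EOF'
	[Service]
	ExecStart=
	ExecStart=/usr/bin/kubelet --config=/var/lib/kubelet/config.yaml
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart kubelet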
	I1204 23:11:22.862387  389201 ssh_runner.go:195] Run: crio config
	I1204 23:11:22.906710  389201 cni.go:84] Creating CNI manager for ""
	I1204 23:11:22.906743  389201 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1204 23:11:22.906760  389201 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 23:11:22.906791  389201 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-630093 NodeName:addons-630093 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 23:11:22.906954  389201 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-630093"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 23:11:22.907084  389201 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 23:11:22.916048  389201 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 23:11:22.916128  389201 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 23:11:22.924791  389201 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1204 23:11:22.942166  389201 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 23:11:22.959356  389201 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
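Note: the 2287-byte kubeadm.yaml.new staged here is the multi-document config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). Newer kubeadm releases can sanity-check such a file before init; an illustrative invocation, assuming the same staging path:

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new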
	I1204 23:11:22.976793  389201 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1204 23:11:22.980197  389201 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:11:22.990601  389201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:11:23.062561  389201 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:11:23.075015  389201 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093 for IP: 192.168.49.2
	I1204 23:11:23.075040  389201 certs.go:194] generating shared ca certs ...
	I1204 23:11:23.075059  389201 certs.go:226] acquiring lock for ca certs: {Name:mk50fab2a60ec4c58718c6f0f51391a1fd27b49a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.075181  389201 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-381016/.minikube/ca.key
	I1204 23:11:23.204545  389201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-381016/.minikube/ca.crt ...
	I1204 23:11:23.204578  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/ca.crt: {Name:mkc915739630db1af592b52d8db74ccdd723c7d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.204795  389201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-381016/.minikube/ca.key ...
	I1204 23:11:23.204810  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/ca.key: {Name:mk98e76db05ffadd20917a2d52b7c5260ba39b61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.204916  389201 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.key
	I1204 23:11:23.290846  389201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.crt ...
	I1204 23:11:23.290885  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.crt: {Name:mkde85a69cd8a6277fae67df41cc397c773bd1a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.291129  389201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.key ...
	I1204 23:11:23.291148  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.key: {Name:mk4d177cf9edd13c7ad0b568d9086767e339e8d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.291277  389201 certs.go:256] generating profile certs ...
	I1204 23:11:23.291366  389201 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.key
	I1204 23:11:23.291400  389201 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt with IP's: []
	I1204 23:11:23.499855  389201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt ...
	I1204 23:11:23.499895  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: {Name:mk9311f602c7b1a2b44c19176448b2aa4b32b7c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.500105  389201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.key ...
	I1204 23:11:23.500123  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.key: {Name:mk9ddfb2303f17ccf88a6e5b8c00cffba1cd1a53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.500223  389201 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.key.8394f548
	I1204 23:11:23.500249  389201 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.crt.8394f548 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1204 23:11:23.788463  389201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.crt.8394f548 ...
	I1204 23:11:23.788500  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.crt.8394f548: {Name:mk43ba65c92ad4331db8d9847c5ef32165302741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.788694  389201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.key.8394f548 ...
	I1204 23:11:23.788714  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.key.8394f548: {Name:mkaced9e8196936ffe141d4dc3e6adda91a33533 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.788818  389201 certs.go:381] copying /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.crt.8394f548 -> /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.crt
	I1204 23:11:23.788916  389201 certs.go:385] copying /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.key.8394f548 -> /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.key
	I1204 23:11:23.788997  389201 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.key
	I1204 23:11:23.789023  389201 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.crt with IP's: []
	I1204 23:11:24.148068  389201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.crt ...
	I1204 23:11:24.148104  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.crt: {Name:mk0ee13602067d1cc858c9534a9707d295b361ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:24.148309  389201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.key ...
	I1204 23:11:24.148327  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.key: {Name:mk0ba88937bb7ca6e51a8cf0c8d2ef8507f0374f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:24.148532  389201 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca-key.pem (1675 bytes)
	I1204 23:11:24.148585  389201 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca.pem (1082 bytes)
	I1204 23:11:24.148628  389201 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/cert.pem (1123 bytes)
	I1204 23:11:24.148673  389201 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/key.pem (1679 bytes)
	I1204 23:11:24.149367  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 23:11:24.173224  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 23:11:24.196229  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 23:11:24.219088  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 23:11:24.242335  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1204 23:11:24.265632  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 23:11:24.288555  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 23:11:24.311820  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 23:11:24.334208  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 23:11:24.356395  389201 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 23:11:24.373538  389201 ssh_runner.go:195] Run: openssl version
	I1204 23:11:24.378816  389201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 23:11:24.388861  389201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:11:24.392560  389201 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:11 /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:11:24.392635  389201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:11:24.399222  389201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
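Note: b5213941.0 follows OpenSSL's hashed-directory convention: TLS clients locate a CA in /etc/ssl/certs by its subject-name hash plus a .0 suffix, which is why the hash is computed first. Reproduced by hand (sketch using the same paths as the log):

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"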
	I1204 23:11:24.408373  389201 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 23:11:24.411765  389201 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 23:11:24.411828  389201 kubeadm.go:392] StartCluster: {Name:addons-630093 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-630093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:11:24.411930  389201 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 23:11:24.412006  389201 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 23:11:24.445620  389201 cri.go:89] found id: ""
	I1204 23:11:24.445692  389201 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 23:11:24.454281  389201 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 23:11:24.462658  389201 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1204 23:11:24.462715  389201 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 23:11:24.471058  389201 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 23:11:24.471082  389201 kubeadm.go:157] found existing configuration files:
	
	I1204 23:11:24.471133  389201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 23:11:24.479379  389201 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 23:11:24.479446  389201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 23:11:24.488299  389201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 23:11:24.496565  389201 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 23:11:24.496635  389201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 23:11:24.505412  389201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 23:11:24.514190  389201 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 23:11:24.514243  389201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 23:11:24.522477  389201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 23:11:24.531365  389201 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 23:11:24.531421  389201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 23:11:24.539416  389201 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1204 23:11:24.592567  389201 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1071-gcp\n", err: exit status 1
	I1204 23:11:24.645179  389201 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 23:11:33.426336  389201 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 23:11:33.426437  389201 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 23:11:33.426522  389201 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1204 23:11:33.426572  389201 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1071-gcp
	I1204 23:11:33.426602  389201 kubeadm.go:310] OS: Linux
	I1204 23:11:33.426679  389201 kubeadm.go:310] CGROUPS_CPU: enabled
	I1204 23:11:33.426720  389201 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1204 23:11:33.426798  389201 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1204 23:11:33.426877  389201 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1204 23:11:33.426958  389201 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1204 23:11:33.427034  389201 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1204 23:11:33.427111  389201 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1204 23:11:33.427182  389201 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1204 23:11:33.427243  389201 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1204 23:11:33.427304  389201 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 23:11:33.427436  389201 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 23:11:33.427575  389201 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 23:11:33.427676  389201 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 23:11:33.429670  389201 out.go:235]   - Generating certificates and keys ...
	I1204 23:11:33.429776  389201 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 23:11:33.429879  389201 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 23:11:33.429944  389201 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1204 23:11:33.429996  389201 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1204 23:11:33.430058  389201 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1204 23:11:33.430106  389201 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1204 23:11:33.430157  389201 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1204 23:11:33.430253  389201 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-630093 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1204 23:11:33.430323  389201 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1204 23:11:33.430455  389201 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-630093 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1204 23:11:33.430550  389201 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1204 23:11:33.430624  389201 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1204 23:11:33.430694  389201 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1204 23:11:33.430742  389201 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 23:11:33.430787  389201 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 23:11:33.430873  389201 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 23:11:33.430954  389201 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 23:11:33.431013  389201 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 23:11:33.431063  389201 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 23:11:33.431131  389201 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 23:11:33.431189  389201 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 23:11:33.432586  389201 out.go:235]   - Booting up control plane ...
	I1204 23:11:33.432667  389201 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 23:11:33.432728  389201 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 23:11:33.432786  389201 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 23:11:33.432889  389201 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 23:11:33.433004  389201 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 23:11:33.433088  389201 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 23:11:33.433245  389201 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 23:11:33.433395  389201 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 23:11:33.433490  389201 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.66305ms
	I1204 23:11:33.433586  389201 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 23:11:33.433659  389201 kubeadm.go:310] [api-check] The API server is healthy after 4.001728957s
	I1204 23:11:33.433784  389201 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 23:11:33.433892  389201 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 23:11:33.433961  389201 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 23:11:33.434106  389201 kubeadm.go:310] [mark-control-plane] Marking the node addons-630093 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 23:11:33.434165  389201 kubeadm.go:310] [bootstrap-token] Using token: 6qxarj.88k5pjf3ytyfzen4
	I1204 23:11:33.435845  389201 out.go:235]   - Configuring RBAC rules ...
	I1204 23:11:33.435945  389201 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 23:11:33.436019  389201 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 23:11:33.436136  389201 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 23:11:33.436246  389201 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 23:11:33.436351  389201 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 23:11:33.436423  389201 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 23:11:33.436515  389201 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 23:11:33.436552  389201 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 23:11:33.436626  389201 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 23:11:33.436642  389201 kubeadm.go:310] 
	I1204 23:11:33.436722  389201 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 23:11:33.436737  389201 kubeadm.go:310] 
	I1204 23:11:33.436836  389201 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 23:11:33.436844  389201 kubeadm.go:310] 
	I1204 23:11:33.436864  389201 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 23:11:33.436913  389201 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 23:11:33.436961  389201 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 23:11:33.436967  389201 kubeadm.go:310] 
	I1204 23:11:33.437008  389201 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 23:11:33.437016  389201 kubeadm.go:310] 
	I1204 23:11:33.437056  389201 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 23:11:33.437062  389201 kubeadm.go:310] 
	I1204 23:11:33.437107  389201 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 23:11:33.437170  389201 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 23:11:33.437258  389201 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 23:11:33.437274  389201 kubeadm.go:310] 
	I1204 23:11:33.437411  389201 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 23:11:33.437541  389201 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 23:11:33.437553  389201 kubeadm.go:310] 
	I1204 23:11:33.437672  389201 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6qxarj.88k5pjf3ytyfzen4 \
	I1204 23:11:33.437797  389201 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e2721502eca5fe8af4d77f137e4406b90f31d1565f7dd87db91cf7b9fa1e9057 \
	I1204 23:11:33.437833  389201 kubeadm.go:310] 	--control-plane 
	I1204 23:11:33.437842  389201 kubeadm.go:310] 
	I1204 23:11:33.437945  389201 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 23:11:33.437954  389201 kubeadm.go:310] 
	I1204 23:11:33.438055  389201 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6qxarj.88k5pjf3ytyfzen4 \
	I1204 23:11:33.438195  389201 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e2721502eca5fe8af4d77f137e4406b90f31d1565f7dd87db91cf7b9fa1e9057 
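Note: the bootstrap token above (6qxarj.88k5pjf3ytyfzen4) carries the 24h TTL set in the kubeadm config, so once it expires a fresh worker join command can be regenerated on the control plane:

	kubeadm token create --print-join-command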
	I1204 23:11:33.438211  389201 cni.go:84] Creating CNI manager for ""
	I1204 23:11:33.438221  389201 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1204 23:11:33.439987  389201 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1204 23:11:33.441251  389201 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1204 23:11:33.445237  389201 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1204 23:11:33.445258  389201 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1204 23:11:33.462279  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
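Note: the cni.yaml applied here is the kindnet manifest recommended at cni.go:143 above. One way to confirm the daemonset rolled out (the app=kindnet label is an assumption based on minikube's kindnet manifest):

	kubectl -n kube-system get pods -l app=kindnet -o wide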
	I1204 23:11:33.665861  389201 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 23:11:33.665944  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:33.665972  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-630093 minikube.k8s.io/updated_at=2024_12_04T23_11_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d minikube.k8s.io/name=addons-630093 minikube.k8s.io/primary=true
	I1204 23:11:33.673805  389201 ops.go:34] apiserver oom_adj: -16
	I1204 23:11:33.756672  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:34.256804  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:34.757586  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:35.256809  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:35.757274  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:36.256932  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:36.757774  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:37.257415  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:37.756756  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:38.256823  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:38.333806  389201 kubeadm.go:1113] duration metric: took 4.667934536s to wait for elevateKubeSystemPrivileges
	I1204 23:11:38.333851  389201 kubeadm.go:394] duration metric: took 13.922029737s to StartCluster
	I1204 23:11:38.333875  389201 settings.go:142] acquiring lock: {Name:mke2b5bd7468e0e3a170be0f2243b433cdca2b2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:38.334020  389201 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20045-381016/kubeconfig
	I1204 23:11:38.334556  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/kubeconfig: {Name:mk53a4e908644f8dfb244bee65db94736a5dc52e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:38.334826  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1204 23:11:38.334847  389201 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:11:38.334940  389201 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1204 23:11:38.335050  389201 config.go:182] Loaded profile config "addons-630093": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:11:38.335067  389201 addons.go:69] Setting yakd=true in profile "addons-630093"
	I1204 23:11:38.335086  389201 addons.go:234] Setting addon yakd=true in "addons-630093"
	I1204 23:11:38.335088  389201 addons.go:69] Setting inspektor-gadget=true in profile "addons-630093"
	I1204 23:11:38.335099  389201 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-630093"
	I1204 23:11:38.335108  389201 addons.go:69] Setting gcp-auth=true in profile "addons-630093"
	I1204 23:11:38.335116  389201 addons.go:234] Setting addon inspektor-gadget=true in "addons-630093"
	I1204 23:11:38.335118  389201 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-630093"
	I1204 23:11:38.335126  389201 mustload.go:65] Loading cluster: addons-630093
	I1204 23:11:38.335120  389201 addons.go:69] Setting storage-provisioner=true in profile "addons-630093"
	I1204 23:11:38.335142  389201 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-630093"
	I1204 23:11:38.335151  389201 addons.go:234] Setting addon storage-provisioner=true in "addons-630093"
	I1204 23:11:38.335142  389201 addons.go:69] Setting ingress=true in profile "addons-630093"
	I1204 23:11:38.335165  389201 addons.go:69] Setting ingress-dns=true in profile "addons-630093"
	I1204 23:11:38.335168  389201 addons.go:234] Setting addon ingress=true in "addons-630093"
	I1204 23:11:38.335170  389201 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-630093"
	I1204 23:11:38.335177  389201 addons.go:234] Setting addon ingress-dns=true in "addons-630093"
	I1204 23:11:38.335175  389201 addons.go:69] Setting metrics-server=true in profile "addons-630093"
	I1204 23:11:38.335186  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335187  389201 addons.go:234] Setting addon metrics-server=true in "addons-630093"
	I1204 23:11:38.335201  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335205  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335251  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335270  389201 config.go:182] Loaded profile config "addons-630093": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:11:38.335598  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335639  389201 addons.go:69] Setting registry=true in profile "addons-630093"
	I1204 23:11:38.335664  389201 addons.go:234] Setting addon registry=true in "addons-630093"
	I1204 23:11:38.335690  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335770  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335788  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335788  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335799  389201 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-630093"
	I1204 23:11:38.335865  389201 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-630093"
	I1204 23:11:38.335890  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.336127  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.336356  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335154  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335131  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.337395  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335166  389201 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-630093"
	I1204 23:11:38.337522  389201 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-630093"
	I1204 23:11:38.335779  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.337583  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335154  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335618  389201 addons.go:69] Setting volcano=true in profile "addons-630093"
	I1204 23:11:38.337980  389201 addons.go:234] Setting addon volcano=true in "addons-630093"
	I1204 23:11:38.338050  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.338346  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.338511  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.338659  389201 out.go:177] * Verifying Kubernetes components...
	I1204 23:11:38.338743  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335079  389201 addons.go:69] Setting cloud-spanner=true in profile "addons-630093"
	I1204 23:11:38.339343  389201 addons.go:234] Setting addon cloud-spanner=true in "addons-630093"
	I1204 23:11:38.339416  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.342329  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.343246  389201 addons.go:69] Setting default-storageclass=true in profile "addons-630093"
	I1204 23:11:38.343284  389201 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-630093"
	I1204 23:11:38.343690  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.343795  389201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:11:38.335605  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335627  389201 addons.go:69] Setting volumesnapshots=true in profile "addons-630093"
	I1204 23:11:38.344127  389201 addons.go:234] Setting addon volumesnapshots=true in "addons-630093"
	I1204 23:11:38.344187  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.369102  389201 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1204 23:11:38.370392  389201 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1204 23:11:38.370441  389201 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1204 23:11:38.370514  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.375367  389201 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1204 23:11:38.376764  389201 out.go:177]   - Using image docker.io/registry:2.8.3
	I1204 23:11:38.378315  389201 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1204 23:11:38.378339  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1204 23:11:38.378415  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.387789  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.390443  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.396264  389201 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1204 23:11:38.397739  389201 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1204 23:11:38.397765  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1204 23:11:38.397836  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.403885  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1204 23:11:38.404091  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.406664  389201 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1204 23:11:38.407794  389201 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1204 23:11:38.409084  389201 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 23:11:38.413429  389201 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1204 23:11:38.413459  389201 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1204 23:11:38.413462  389201 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1204 23:11:38.413531  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.413533  389201 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1204 23:11:38.413544  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1204 23:11:38.413597  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.413711  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1204 23:11:38.413833  389201 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 23:11:38.413845  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 23:11:38.413897  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.414878  389201 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1204 23:11:38.414894  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1204 23:11:38.414957  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.416261  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1204 23:11:38.418117  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1204 23:11:38.419304  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1204 23:11:38.420751  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1204 23:11:38.422006  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1204 23:11:38.423748  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1204 23:11:38.424837  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1204 23:11:38.424860  389201 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1204 23:11:38.424941  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.430181  389201 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1204 23:11:38.434134  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1204 23:11:38.434699  389201 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1204 23:11:38.435845  389201 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1204 23:11:38.435868  389201 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1204 23:11:38.435951  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.438678  389201 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1204 23:11:38.444191  389201 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1204 23:11:38.444221  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1204 23:11:38.444288  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.451026  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.452847  389201 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1204 23:11:38.454187  389201 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1204 23:11:38.454245  389201 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1204 23:11:38.454263  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1204 23:11:38.454326  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.455564  389201 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1204 23:11:38.455600  389201 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1204 23:11:38.455669  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	W1204 23:11:38.458222  389201 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1204 23:11:38.462209  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.470069  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.470586  389201 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-630093"
	I1204 23:11:38.470686  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.471216  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.473482  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.476209  389201 addons.go:234] Setting addon default-storageclass=true in "addons-630093"
	I1204 23:11:38.476266  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.476733  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.477420  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.486737  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.488076  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.494091  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.494760  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.500157  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.514409  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.517053  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.526764  389201 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1204 23:11:38.528218  389201 out.go:177]   - Using image docker.io/busybox:stable
	I1204 23:11:38.529542  389201 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1204 23:11:38.529568  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1204 23:11:38.529635  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.532873  389201 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 23:11:38.532892  389201 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 23:11:38.532949  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.547794  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.550902  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.714491  389201 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:11:38.714590  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1204 23:11:38.730697  389201 node_ready.go:35] waiting up to 6m0s for node "addons-630093" to be "Ready" ...
	I1204 23:11:38.896083  389201 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1204 23:11:38.896129  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1204 23:11:38.902650  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 23:11:38.903274  389201 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1204 23:11:38.903334  389201 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1204 23:11:38.908154  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1204 23:11:38.995367  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1204 23:11:38.996682  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1204 23:11:39.003953  389201 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1204 23:11:39.003987  389201 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1204 23:11:39.009058  389201 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1204 23:11:39.009092  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1204 23:11:39.011952  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 23:11:39.015960  389201 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1204 23:11:39.015992  389201 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1204 23:11:39.095325  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1204 23:11:39.099215  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1204 23:11:39.107754  389201 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1204 23:11:39.107787  389201 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1204 23:11:39.111656  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1204 23:11:39.199729  389201 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1204 23:11:39.199775  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1204 23:11:39.206060  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1204 23:11:39.206157  389201 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1204 23:11:39.207660  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1204 23:11:39.313681  389201 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 23:11:39.313712  389201 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1204 23:11:39.315754  389201 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1204 23:11:39.315836  389201 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1204 23:11:39.402197  389201 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1204 23:11:39.402298  389201 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1204 23:11:39.497285  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1204 23:11:39.613001  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 23:11:39.795499  389201 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1204 23:11:39.795537  389201 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1204 23:11:39.908631  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1204 23:11:39.908730  389201 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1204 23:11:40.110384  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1204 23:11:40.110490  389201 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1204 23:11:40.203583  389201 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1204 23:11:40.203684  389201 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1204 23:11:40.302900  389201 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1204 23:11:40.302989  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1204 23:11:40.305736  389201 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.591107897s)
	I1204 23:11:40.305865  389201 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
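
The one-liner completed above rewrites the coredns ConfigMap in place: the Corefile is piped through sed, which inserts a `hosts` stanza ahead of the `forward` plugin (so `host.minikube.internal` resolves to the gateway 192.168.49.1 from inside the cluster, falling through to the next plugin for everything else) and a `log` directive ahead of `errors`, and the result is fed back through `kubectl replace`. Reconstructed from that sed expression, the relevant part of the Corefile afterwards looks like this (unchanged plugins elided):

	.:53 {
	        log
	        errors
	        ...
	        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...
	}
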
	I1204 23:11:40.415986  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.513233503s)
	I1204 23:11:40.516873  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1204 23:11:40.516909  389201 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1204 23:11:40.606740  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1204 23:11:40.606836  389201 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1204 23:11:40.706038  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1204 23:11:41.013840  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.105639169s)
	I1204 23:11:41.019324  389201 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-630093" context rescaled to 1 replicas
	I1204 23:11:41.019970  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
	I1204 23:11:41.098870  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1204 23:11:41.098907  389201 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1204 23:11:41.103755  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.108338868s)
	I1204 23:11:41.296521  389201 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1204 23:11:41.296620  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1204 23:11:41.604186  389201 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1204 23:11:41.604271  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1204 23:11:41.711584  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1204 23:11:41.895283  389201 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1204 23:11:41.895375  389201 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1204 23:11:42.005218  389201 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1204 23:11:42.005322  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1204 23:11:42.196571  389201 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1204 23:11:42.196687  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1204 23:11:42.209452  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.212725161s)
	I1204 23:11:42.322610  389201 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1204 23:11:42.322752  389201 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1204 23:11:42.502862  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1204 23:11:42.809979  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.797973312s)
	I1204 23:11:42.810142  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.714779141s)
	I1204 23:11:43.015142  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.91582183s)
	I1204 23:11:43.300319  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
	I1204 23:11:44.520283  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.40857896s)
	I1204 23:11:44.520372  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.02299016s)
	I1204 23:11:44.520392  389201 addons.go:475] Verifying addon ingress=true in "addons-630093"
	I1204 23:11:44.520419  389201 addons.go:475] Verifying addon registry=true in "addons-630093"
	I1204 23:11:44.520330  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.312579258s)
	I1204 23:11:44.520780  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.814712029s)
	I1204 23:11:44.520741  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.907702215s)
	I1204 23:11:44.521986  389201 addons.go:475] Verifying addon metrics-server=true in "addons-630093"
	I1204 23:11:44.522358  389201 out.go:177] * Verifying ingress addon...
	I1204 23:11:44.522391  389201 out.go:177] * Verifying registry addon...
	I1204 23:11:44.523305  389201 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-630093 service yakd-dashboard -n yakd-dashboard
	
	I1204 23:11:44.525119  389201 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1204 23:11:44.525119  389201 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
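
The kapi.go lines that dominate the rest of this log are a label-selector poll: list the pods matching the selector in the namespace, report the current phase on a miss, and keep going until every match reports Ready or the timeout fires. A minimal sketch of that kind of loop using client-go; `waitForPodsReady`, the 500ms interval, and the main wiring are illustrative, not minikube's actual implementation:

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether a pod publishes the PodReady=True condition.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitForPodsReady polls until every pod matching selector in ns is
	// Ready, logging the current phase on each miss as kapi.go does above.
	func waitForPodsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient error or nothing scheduled yet: keep polling
				}
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, pods.Items[i].Status.Phase)
						return false, nil
					}
				}
				return true, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		err = waitForPodsReady(context.Background(), cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute)
		fmt.Println("done:", err)
	}
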
	I1204 23:11:44.600633  389201 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1204 23:11:44.600664  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:44.600855  389201 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1204 23:11:44.600872  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:45.030335  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:45.031111  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:45.524701  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.813019436s)
	W1204 23:11:45.524761  389201 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1204 23:11:45.524790  389201 retry.go:31] will retry after 181.865687ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
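
The failure above is an ordering race, not a hard error: the apply submits the VolumeSnapshotClass in the same batch as the CRD that defines it, so the API server rejects it before the CRD is served. Rather than failing the addon, retry.go schedules another attempt after a short delay; the `kubectl apply --force` at 23:11:45 below is that retry, and it succeeds once the CRDs are registered. A compact retry-with-backoff sketch in the same spirit, stdlib only; the attempt count, delays, and jitter are illustrative:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff runs op up to attempts times, sleeping a jittered,
	// roughly doubling delay between failures; returns the last error.
	func retryWithBackoff(attempts int, initial time.Duration, op func() error) error {
		delay := initial
		var err error
		for i := 0; i < attempts; i++ {
			if err = op(); err == nil {
				return nil
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
			time.Sleep(delay + jitter)
			delay *= 2
		}
		return err
	}

	func main() {
		calls := 0
		_ = retryWithBackoff(5, 200*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return fmt.Errorf("no matches for kind \"VolumeSnapshotClass\"") // CRD not served yet
			}
			return nil
		})
		fmt.Println("succeeded after", calls, "attempts")
	}
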
	I1204 23:11:45.529400  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:45.529925  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:45.620284  389201 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1204 23:11:45.620363  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:45.640586  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:45.707473  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1204 23:11:45.802964  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
	I1204 23:11:45.916555  389201 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1204 23:11:45.999202  389201 addons.go:234] Setting addon gcp-auth=true in "addons-630093"
	I1204 23:11:45.999264  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:45.999784  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:46.028530  389201 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1204 23:11:46.028595  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:46.031316  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:46.031818  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:46.049437  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:46.408520  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.905505829s)
	I1204 23:11:46.408572  389201 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-630093"
	I1204 23:11:46.410390  389201 out.go:177] * Verifying csi-hostpath-driver addon...
	I1204 23:11:46.413226  389201 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1204 23:11:46.423132  389201 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1204 23:11:46.423158  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:46.530521  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:46.530917  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:46.918004  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:47.028913  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:47.029388  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:47.417466  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:47.531801  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:47.532309  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:47.916654  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:48.028517  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:48.029048  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:48.236314  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
	I1204 23:11:48.416588  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:48.528958  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:48.529570  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:48.735256  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.027721867s)
	I1204 23:11:48.735290  389201 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.706722291s)
	I1204 23:11:48.737269  389201 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1204 23:11:48.738737  389201 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1204 23:11:48.739945  389201 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1204 23:11:48.739962  389201 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1204 23:11:48.757606  389201 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1204 23:11:48.757640  389201 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1204 23:11:48.774462  389201 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1204 23:11:48.774491  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1204 23:11:48.791359  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1204 23:11:48.917479  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:49.028378  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:49.028791  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:49.119035  389201 addons.go:475] Verifying addon gcp-auth=true in "addons-630093"
	I1204 23:11:49.120662  389201 out.go:177] * Verifying gcp-auth addon...
	I1204 23:11:49.123168  389201 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1204 23:11:49.127558  389201 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1204 23:11:49.127594  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:49.417311  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:49.529241  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:49.529771  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:49.626790  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:49.917626  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:50.028348  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:50.028726  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:50.128054  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:50.417233  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:50.529158  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:50.529580  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:50.627050  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:50.734676  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
	I1204 23:11:50.917259  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:51.029147  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:51.029767  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:51.126874  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:51.417238  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:51.529239  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:51.529661  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:51.627160  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:51.916950  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:52.028762  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:52.029207  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:52.127128  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:52.417313  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:52.529136  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:52.529632  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:52.626885  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:52.917040  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:53.028643  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:53.029069  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:53.126271  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:53.233877  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
	I1204 23:11:53.417285  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:53.529030  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:53.529451  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:53.626877  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:53.917489  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:54.029327  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:54.029771  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:54.127217  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:54.416734  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:54.528697  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:54.529051  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:54.626826  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:54.916888  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:55.028438  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:55.028959  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:55.126396  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:55.234291  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
	I1204 23:11:55.417202  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:55.528962  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:55.529441  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:55.626790  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:55.917367  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:56.028910  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:56.029339  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:56.127003  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:56.416550  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:56.528268  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:56.528637  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:56.626903  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:56.917742  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:57.028644  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:57.029259  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:57.126655  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:57.417402  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:57.528943  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:57.529266  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:57.626610  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:57.802859  389201 node_ready.go:49] node "addons-630093" has status "Ready":"True"
	I1204 23:11:57.802968  389201 node_ready.go:38] duration metric: took 19.072220894s for node "addons-630093" to be "Ready" ...
	I1204 23:11:57.803001  389201 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
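
At 23:11:57 the node flips to Ready after roughly 19s, and the gating shifts from node-level to pod-level: the test now additionally waits for every system-critical pod (kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, per the label list above) to report Ready. The node check itself reduces to reading the NodeReady condition from the node status, roughly as below; a sketch only, reusing the imports from the poll-loop sketch earlier, with an illustrative helper name:

	// nodeReady reports whether the named node publishes NodeReady=True,
	// the condition node_ready.go is polling for in these lines.
	func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
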
	I1204 23:11:57.812284  389201 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace to be "Ready" ...
	I1204 23:11:57.918256  389201 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1204 23:11:57.918288  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:58.028987  389201 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1204 23:11:58.029025  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:58.029163  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:58.128052  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:58.418190  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:58.529517  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:58.529923  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:58.627312  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:58.919346  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:59.029950  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:59.030369  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:59.127570  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:59.418251  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:59.530785  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:59.531584  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:59.630759  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:59.818327  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:11:59.918676  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:00.030531  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:00.030960  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:00.127203  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:00.418498  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:00.529214  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:00.529347  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:00.626705  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:00.919036  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:01.029541  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:01.029735  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:01.127079  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:01.417804  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:01.529706  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:01.530306  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:01.626425  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:01.818875  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:01.918913  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:02.029895  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:02.030382  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:02.127260  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:02.423666  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:02.529870  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:02.530595  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:02.627705  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:02.918184  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:03.096822  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:03.098279  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:03.126704  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:03.418293  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:03.530189  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:03.531307  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:03.626994  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:03.819175  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:03.919019  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:04.029490  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:04.030689  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:04.127527  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:04.418611  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:04.529829  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:04.530049  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:04.627138  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:04.918884  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:05.029547  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:05.030544  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:05.127501  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:05.418586  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:05.529727  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:05.530098  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:05.629968  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:05.819250  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:05.917895  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:06.030341  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:06.030532  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:06.130159  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:06.417534  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:06.529640  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:06.529905  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:06.626512  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:06.918521  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:07.029270  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:07.029688  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:07.127053  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:07.417502  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:07.529692  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:07.530328  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:07.629361  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:07.917534  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:08.029222  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:08.029469  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:08.127082  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:08.319034  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:08.419261  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:08.529942  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:08.530672  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:08.627267  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:08.917968  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:09.029951  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:09.030163  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:09.126878  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:09.418269  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:09.529306  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:09.529537  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:09.627199  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:09.918335  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:10.029495  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:10.029837  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:10.127443  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:10.319436  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:10.418755  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:10.529622  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:10.529807  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:10.626252  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:10.917779  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:11.030059  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:11.030182  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:11.127180  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:11.419556  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:11.530723  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:11.531122  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:11.626618  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:11.918234  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:12.029550  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:12.029678  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:12.127740  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:12.418986  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:12.530019  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:12.530137  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:12.630114  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:12.819093  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:12.918200  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:13.029270  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:13.029507  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:13.127361  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:13.418296  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:13.528977  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:13.529560  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:13.629701  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:13.918107  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:14.028623  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:14.029060  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:14.126995  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:14.417833  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:14.601066  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:14.601685  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:14.700398  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:14.819539  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:14.918753  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:15.029149  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:15.029311  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:15.127355  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:15.417956  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:15.530046  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:15.530173  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:15.626804  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:15.817465  389201 pod_ready.go:93] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:15.817493  389201 pod_ready.go:82] duration metric: took 18.005165509s for pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.817504  389201 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nvslc" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.822063  389201 pod_ready.go:93] pod "coredns-7c65d6cfc9-nvslc" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:15.822085  389201 pod_ready.go:82] duration metric: took 4.574786ms for pod "coredns-7c65d6cfc9-nvslc" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.822105  389201 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.826436  389201 pod_ready.go:93] pod "etcd-addons-630093" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:15.826459  389201 pod_ready.go:82] duration metric: took 4.348229ms for pod "etcd-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.826472  389201 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.831213  389201 pod_ready.go:93] pod "kube-apiserver-addons-630093" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:15.831241  389201 pod_ready.go:82] duration metric: took 4.762165ms for pod "kube-apiserver-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.831254  389201 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.835452  389201 pod_ready.go:93] pod "kube-controller-manager-addons-630093" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:15.835474  389201 pod_ready.go:82] duration metric: took 4.212413ms for pod "kube-controller-manager-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.835486  389201 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k9l4p" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.918128  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:16.028729  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:16.029367  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:16.127315  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:16.216237  389201 pod_ready.go:93] pod "kube-proxy-k9l4p" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:16.216263  389201 pod_ready.go:82] duration metric: took 380.769812ms for pod "kube-proxy-k9l4p" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:16.216274  389201 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:16.417739  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:16.529747  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:16.530393  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:16.615744  389201 pod_ready.go:93] pod "kube-scheduler-addons-630093" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:16.615777  389201 pod_ready.go:82] duration metric: took 399.4948ms for pod "kube-scheduler-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:16.615792  389201 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:16.629644  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:16.918480  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:17.029640  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:17.030079  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:17.127575  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:17.418114  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:17.528932  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:17.530075  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:17.704033  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:17.998609  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:18.099865  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:18.100201  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:18.197667  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:18.418883  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:18.599572  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:18.600671  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:18.701570  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:18.703573  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:18.920015  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:19.100730  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:19.102395  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:19.198834  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:19.418509  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:19.529727  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:19.530383  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:19.626273  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:19.918805  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:20.029240  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:20.029932  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:20.126903  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:20.418249  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:20.529801  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:20.530308  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:20.626097  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:20.918878  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:21.029289  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:21.029519  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:21.122606  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:21.126039  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:21.418484  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:21.529710  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:21.530710  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:21.626146  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:21.918962  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:22.029458  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:22.029740  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:22.127214  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:22.419474  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:22.530071  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:22.530666  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:22.626757  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:22.919558  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:23.030183  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:23.030603  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:23.126737  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:23.419160  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:23.530176  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:23.530357  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:23.622846  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:23.626203  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:23.918700  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:24.028728  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:24.028982  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:24.126654  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:24.417980  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:24.530135  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:24.531100  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:24.627054  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:24.918427  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:25.028887  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:25.029218  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:25.126097  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:25.418781  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:25.529648  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:25.529792  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:25.625375  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:25.918175  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:26.029449  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:26.029717  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:26.121949  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:26.125965  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:26.418478  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:26.529251  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:26.529458  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:26.626865  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:26.918569  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:27.029067  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:27.030277  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:27.125626  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:27.418385  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:27.528662  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:27.529405  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:27.628474  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:27.917874  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:28.029704  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:28.029928  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:28.122056  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:28.126396  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:28.419714  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:28.529079  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:28.529300  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:28.628622  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:28.918659  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:29.028740  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:29.029352  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:29.126050  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:29.417959  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:29.529472  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:29.530620  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:29.629092  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:29.919400  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:30.030302  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:30.030514  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:30.122668  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:30.126280  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:30.418540  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:30.529288  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:30.529642  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:30.626549  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:30.918094  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:31.028726  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:31.029185  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:31.127032  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:31.418917  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:31.529225  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:31.529895  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:31.626376  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:31.917674  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:32.029127  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:32.029446  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:32.126980  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:32.418178  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:32.529226  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:32.529801  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:32.622787  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:32.629901  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:32.918843  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:33.029651  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:33.029732  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:33.126752  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:33.417866  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:33.529615  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:33.529803  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:33.626861  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:33.918296  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:34.029295  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:34.029827  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:34.126281  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:34.418699  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:34.529505  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:34.529651  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:34.642845  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:35.016246  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:35.029633  389201 kapi.go:107] duration metric: took 50.504509788s to wait for kubernetes.io/minikube-addons=registry ...
	I1204 23:12:35.030572  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:35.122008  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:35.126344  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:35.418953  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:35.529492  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:35.629301  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:35.917990  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:36.029160  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:36.126923  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:36.418071  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:36.530620  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:36.626415  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:36.918072  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:37.030355  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:37.122395  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:37.130220  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:37.418413  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:37.528927  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:37.625990  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:37.918227  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:38.029187  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:38.126369  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:38.417932  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:38.598800  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:38.697192  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:38.919507  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:39.029934  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:39.126608  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:39.417800  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:39.529782  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:39.621784  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:39.626154  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:39.918849  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:40.030159  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:40.126095  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:40.418225  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:40.531480  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:40.626066  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:40.922455  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:41.030073  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:41.132353  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:41.419213  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:41.530198  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:41.623990  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:41.626185  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:41.918285  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:42.029080  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:42.126525  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:42.417894  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:42.530073  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:42.628888  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:42.917931  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:43.029806  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:43.129456  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:43.417942  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:43.530219  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:43.626382  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:43.919862  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:44.030101  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:44.121891  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:44.126376  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:44.418428  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:44.529385  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:44.626961  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:44.918331  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:45.029815  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:45.130119  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:45.418987  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:45.530112  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:45.626679  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:45.917695  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:46.030308  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:46.122743  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:46.125898  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:46.418369  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:46.530377  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:46.626026  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:46.919590  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:47.029382  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:47.126945  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:47.418103  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:47.529610  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:47.626586  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:47.918784  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:48.030793  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:48.123333  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:48.125995  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:48.418085  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:48.529161  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:48.625851  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:48.918833  389201 kapi.go:107] duration metric: took 1m2.505604843s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1204 23:12:49.029518  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:49.126520  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:49.529429  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:49.626178  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:50.028779  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:50.126359  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:50.529535  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:50.621344  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:50.626657  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:51.029711  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:51.126167  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:51.528977  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:51.625730  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:52.029401  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:52.126687  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:52.529779  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:52.622444  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:52.626730  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:53.029789  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:53.125660  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:53.529648  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:53.625950  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:54.029567  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:54.126564  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:54.529619  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:54.626519  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:55.029917  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:55.121799  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:55.125909  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:55.530199  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:55.626324  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:56.029734  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:56.125940  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:56.529705  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:56.626054  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:57.072272  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:57.122241  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:57.126623  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:57.529316  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:57.626270  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:58.029340  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:58.126509  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:58.529559  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:58.626455  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:59.029135  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:59.126845  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:59.529933  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:59.621754  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:59.625881  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:00.029773  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:00.126622  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:00.529528  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:00.626582  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:01.029576  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:01.127058  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:01.530191  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:01.622552  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:01.626939  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:02.030598  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:02.130438  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:02.529743  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:02.626141  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:03.030953  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:03.149927  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:03.529333  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:03.622858  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:03.626677  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:04.029338  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:04.128963  389201 kapi.go:107] duration metric: took 1m15.005791002s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1204 23:13:04.130952  389201 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-630093 cluster.
	I1204 23:13:04.132630  389201 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1204 23:13:04.134066  389201 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
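	(The `gcp-auth-skip-secret` label named in the message above can be attached to a running pod with client-go. The sketch below is an illustration, not part of the test log or the minikube source; the namespace "default", pod name "nginx", and label value "true" are assumptions — only the label key comes from the log.)

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/types"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (~/.kube/config).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Strategic-merge patch adding the skip label to an existing pod.
		// The "true" value is an assumption; the log only names the key.
		patch := []byte(`{"metadata":{"labels":{"gcp-auth-skip-secret":"true"}}}`)
		if _, err := clientset.CoreV1().Pods("default").Patch(
			context.TODO(), "nginx", types.StrategicMergePatchType, patch, metav1.PatchOptions{},
		); err != nil {
			panic(err)
		}
		fmt.Println("gcp-auth-skip-secret label applied")
	}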
	I1204 23:13:04.599921  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:05.100341  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:05.599382  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:05.623902  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:06.029904  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:06.529164  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:07.029826  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:07.531039  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:08.030122  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:08.123005  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:08.529214  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:09.029839  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:09.529349  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:10.030137  389201 kapi.go:107] duration metric: took 1m25.505015693s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1204 23:13:10.032415  389201 out.go:177] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, default-storageclass, ingress-dns, storage-provisioner, cloud-spanner, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1204 23:13:10.034021  389201 addons.go:510] duration metric: took 1m31.699072904s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin default-storageclass ingress-dns storage-provisioner cloud-spanner storage-provisioner-rancher inspektor-gadget metrics-server yakd volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
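The gcp-auth note earlier in this log suggests re-running enable with --refresh so pods created before the addon was ready pick up the mount; with this run's profile name that would look like:

	# Re-enable gcp-auth; existing pods get credentials once recreated
	minikube -p addons-630093 addons enable gcp-auth --refresh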
	I1204 23:13:10.622508  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:13.121894  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:15.622516  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:18.122616  389201 pod_ready.go:93] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"True"
	I1204 23:13:18.122655  389201 pod_ready.go:82] duration metric: took 1m1.506852695s for pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace to be "Ready" ...
	I1204 23:13:18.122671  389201 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-rj8jd" in "kube-system" namespace to be "Ready" ...
	I1204 23:13:18.127666  389201 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-rj8jd" in "kube-system" namespace has status "Ready":"True"
	I1204 23:13:18.127689  389201 pod_ready.go:82] duration metric: took 5.009056ms for pod "nvidia-device-plugin-daemonset-rj8jd" in "kube-system" namespace to be "Ready" ...
	I1204 23:13:18.127712  389201 pod_ready.go:39] duration metric: took 1m20.324660399s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 23:13:18.127736  389201 api_server.go:52] waiting for apiserver process to appear ...
	I1204 23:13:18.127773  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 23:13:18.127852  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 23:13:18.163496  389201 cri.go:89] found id: "697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f"
	I1204 23:13:18.163523  389201 cri.go:89] found id: ""
	I1204 23:13:18.163535  389201 logs.go:282] 1 containers: [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f]
	I1204 23:13:18.163604  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:18.167359  389201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 23:13:18.167448  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 23:13:18.204556  389201 cri.go:89] found id: "249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1"
	I1204 23:13:18.204586  389201 cri.go:89] found id: ""
	I1204 23:13:18.204598  389201 logs.go:282] 1 containers: [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1]
	I1204 23:13:18.204666  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:18.208385  389201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 23:13:18.208480  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 23:13:18.243732  389201 cri.go:89] found id: "1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2"
	I1204 23:13:18.243758  389201 cri.go:89] found id: ""
	I1204 23:13:18.243766  389201 logs.go:282] 1 containers: [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2]
	I1204 23:13:18.243825  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:18.247475  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 23:13:18.247549  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 23:13:18.284446  389201 cri.go:89] found id: "f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc"
	I1204 23:13:18.284481  389201 cri.go:89] found id: ""
	I1204 23:13:18.284494  389201 logs.go:282] 1 containers: [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc]
	I1204 23:13:18.284553  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:18.288056  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 23:13:18.288154  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 23:13:18.322998  389201 cri.go:89] found id: "76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d"
	I1204 23:13:18.323035  389201 cri.go:89] found id: ""
	I1204 23:13:18.323071  389201 logs.go:282] 1 containers: [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d]
	I1204 23:13:18.323127  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:18.326560  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 23:13:18.326662  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 23:13:18.360672  389201 cri.go:89] found id: "c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec"
	I1204 23:13:18.360695  389201 cri.go:89] found id: ""
	I1204 23:13:18.360704  389201 logs.go:282] 1 containers: [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec]
	I1204 23:13:18.360759  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:18.364394  389201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 23:13:18.364465  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 23:13:18.398753  389201 cri.go:89] found id: "f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac"
	I1204 23:13:18.398779  389201 cri.go:89] found id: ""
	I1204 23:13:18.398788  389201 logs.go:282] 1 containers: [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac]
	I1204 23:13:18.398837  389201 ssh_runner.go:195] Run: which crictl
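The block above resolves one container ID per control-plane component before tailing its logs; the same two steps can be reproduced by hand inside the node (commands mirror the Run lines in this log, with kube-apiserver as one example):

	# e.g. from a shell opened via: minikube -p addons-630093 ssh
	ID=$(sudo crictl ps -a --quiet --name=kube-apiserver)
	sudo /usr/bin/crictl logs --tail 400 "$ID"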
	I1204 23:13:18.402272  389201 logs.go:123] Gathering logs for CRI-O ...
	I1204 23:13:18.402308  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 23:13:18.480499  389201 logs.go:123] Gathering logs for etcd [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1] ...
	I1204 23:13:18.480540  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1"
	I1204 23:13:18.524595  389201 logs.go:123] Gathering logs for kube-scheduler [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc] ...
	I1204 23:13:18.524634  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc"
	I1204 23:13:18.566986  389201 logs.go:123] Gathering logs for kube-proxy [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d] ...
	I1204 23:13:18.567027  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d"
	I1204 23:13:18.602070  389201 logs.go:123] Gathering logs for kube-controller-manager [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec] ...
	I1204 23:13:18.602102  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec"
	I1204 23:13:18.658618  389201 logs.go:123] Gathering logs for kindnet [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac] ...
	I1204 23:13:18.658684  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac"
	I1204 23:13:18.696622  389201 logs.go:123] Gathering logs for container status ...
	I1204 23:13:18.696664  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 23:13:18.740640  389201 logs.go:123] Gathering logs for kubelet ...
	I1204 23:13:18.740679  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1204 23:13:18.779439  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:38 addons-630093 kubelet[1643]: W1204 23:11:38.340569    1643 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.779629  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:38 addons-630093 kubelet[1643]: E1204 23:11:38.340638    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.791512  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.658654    1643 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-630093" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.791674  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.658718    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.791800  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.658773    1643 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.791953  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.658814    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.792143  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661330    1643 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.792315  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661384    1643 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.792450  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661600    1643 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.792613  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661632    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.792743  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661689    1643 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.792901  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661706    1643 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.793033  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661862    1643 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.793194  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661888    1643 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.793332  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661952    1643 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.793495  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661968    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	I1204 23:13:18.826225  389201 logs.go:123] Gathering logs for dmesg ...
	I1204 23:13:18.826269  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
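The dmesg flags above are util-linux options: -P disables the pager, -H prints human-readable timestamps, -L=never suppresses color, and --level restricts output to warnings and more severe messages, so only the last 400 such lines are collected.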
	I1204 23:13:18.853723  389201 logs.go:123] Gathering logs for describe nodes ...
	I1204 23:13:18.853768  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 23:13:18.956948  389201 logs.go:123] Gathering logs for kube-apiserver [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f] ...
	I1204 23:13:18.956987  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f"
	I1204 23:13:19.002234  389201 logs.go:123] Gathering logs for coredns [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2] ...
	I1204 23:13:19.002271  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2"
	I1204 23:13:19.041497  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:13:19.041531  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1204 23:13:19.041595  389201 out.go:270] X Problems detected in kubelet:
	W1204 23:13:19.041609  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661706    1643 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:19.041619  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661862    1643 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-630093' and this object
	W1204 23:13:19.041628  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661888    1643 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:19.041636  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661952    1643 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:19.041642  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661968    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	I1204 23:13:19.041649  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:13:19.041654  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
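The kubelet problems summarized above are Node-authorizer denials: until a pod-to-node binding is registered, the kubelet may not list the ConfigMaps and Secrets that pod references, so these reflector warnings are typically transient during addon startup rather than a sign of missing objects. One way to confirm the objects themselves exist (context name from this run):

	# kube-root-ca.crt is published into every namespace; its presence shows the error is authorization-scoped
	kubectl --context addons-630093 -n kube-system get configmap kube-root-ca.crt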
	I1204 23:13:29.043089  389201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 23:13:29.058130  389201 api_server.go:72] duration metric: took 1m50.723247239s to wait for apiserver process to appear ...
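The process check above relies on pgrep matching the full command line: -f matches the pattern against the whole command, -x requires the pattern to match it exactly, and -n keeps only the newest matching PID, so a non-empty result means a current kube-apiserver process was found.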
	I1204 23:13:29.058169  389201 api_server.go:88] waiting for apiserver healthz status ...
	I1204 23:13:29.058217  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 23:13:29.058262  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 23:13:29.093177  389201 cri.go:89] found id: "697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f"
	I1204 23:13:29.093208  389201 cri.go:89] found id: ""
	I1204 23:13:29.093217  389201 logs.go:282] 1 containers: [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f]
	I1204 23:13:29.093301  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.096893  389201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 23:13:29.096964  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 23:13:29.132522  389201 cri.go:89] found id: "249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1"
	I1204 23:13:29.132544  389201 cri.go:89] found id: ""
	I1204 23:13:29.132554  389201 logs.go:282] 1 containers: [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1]
	I1204 23:13:29.132596  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.136114  389201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 23:13:29.136174  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 23:13:29.171816  389201 cri.go:89] found id: "1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2"
	I1204 23:13:29.171839  389201 cri.go:89] found id: ""
	I1204 23:13:29.171850  389201 logs.go:282] 1 containers: [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2]
	I1204 23:13:29.171897  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.175512  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 23:13:29.175584  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 23:13:29.212035  389201 cri.go:89] found id: "f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc"
	I1204 23:13:29.212060  389201 cri.go:89] found id: ""
	I1204 23:13:29.212069  389201 logs.go:282] 1 containers: [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc]
	I1204 23:13:29.212116  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.215601  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 23:13:29.215669  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 23:13:29.251281  389201 cri.go:89] found id: "76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d"
	I1204 23:13:29.251304  389201 cri.go:89] found id: ""
	I1204 23:13:29.251312  389201 logs.go:282] 1 containers: [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d]
	I1204 23:13:29.251358  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.255228  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 23:13:29.255342  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 23:13:29.290460  389201 cri.go:89] found id: "c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec"
	I1204 23:13:29.290486  389201 cri.go:89] found id: ""
	I1204 23:13:29.290496  389201 logs.go:282] 1 containers: [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec]
	I1204 23:13:29.290559  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.294114  389201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 23:13:29.294191  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 23:13:29.330311  389201 cri.go:89] found id: "f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac"
	I1204 23:13:29.330336  389201 cri.go:89] found id: ""
	I1204 23:13:29.330346  389201 logs.go:282] 1 containers: [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac]
	I1204 23:13:29.330396  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.333992  389201 logs.go:123] Gathering logs for kube-proxy [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d] ...
	I1204 23:13:29.334023  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d"
	I1204 23:13:29.368566  389201 logs.go:123] Gathering logs for kindnet [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac] ...
	I1204 23:13:29.368596  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac"
	I1204 23:13:29.402199  389201 logs.go:123] Gathering logs for CRI-O ...
	I1204 23:13:29.402229  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 23:13:29.482290  389201 logs.go:123] Gathering logs for dmesg ...
	I1204 23:13:29.482339  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 23:13:29.510099  389201 logs.go:123] Gathering logs for describe nodes ...
	I1204 23:13:29.510142  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 23:13:29.615012  389201 logs.go:123] Gathering logs for kube-apiserver [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f] ...
	I1204 23:13:29.615047  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f"
	I1204 23:13:29.660921  389201 logs.go:123] Gathering logs for etcd [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1] ...
	I1204 23:13:29.660962  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1"
	I1204 23:13:29.704015  389201 logs.go:123] Gathering logs for coredns [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2] ...
	I1204 23:13:29.704060  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2"
	I1204 23:13:29.747065  389201 logs.go:123] Gathering logs for kubelet ...
	I1204 23:13:29.747100  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1204 23:13:29.827553  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:38 addons-630093 kubelet[1643]: W1204 23:11:38.340569    1643 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.827776  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:38 addons-630093 kubelet[1643]: E1204 23:11:38.340638    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.839459  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.658654    1643 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-630093" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.839672  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.658718    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.839847  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.658773    1643 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.840075  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.658814    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.840275  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661330    1643 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.840505  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661384    1643 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.840699  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661600    1643 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.840936  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661632    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.841134  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661689    1643 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.841361  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661706    1643 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.841560  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661862    1643 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.841791  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661888    1643 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.842000  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661952    1643 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.842238  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661968    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	I1204 23:13:29.875377  389201 logs.go:123] Gathering logs for kube-scheduler [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc] ...
	I1204 23:13:29.875420  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc"
	I1204 23:13:29.915909  389201 logs.go:123] Gathering logs for kube-controller-manager [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec] ...
	I1204 23:13:29.915942  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec"
	I1204 23:13:29.975760  389201 logs.go:123] Gathering logs for container status ...
	I1204 23:13:29.975799  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 23:13:30.020004  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:13:30.020036  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1204 23:13:30.020104  389201 out.go:270] X Problems detected in kubelet:
	W1204 23:13:30.020121  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661706    1643 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:30.020132  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661862    1643 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-630093' and this object
	W1204 23:13:30.020149  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661888    1643 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:30.020164  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661952    1643 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:30.020176  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661968    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	I1204 23:13:30.020187  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:13:30.020199  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:13:40.021029  389201 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1204 23:13:40.025015  389201 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1204 23:13:40.026016  389201 api_server.go:141] control plane version: v1.31.2
	I1204 23:13:40.026045  389201 api_server.go:131] duration metric: took 10.967868289s to wait for apiserver health ...
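The healthz probe above can be reproduced from the host, since the apiserver serves /healthz to unauthenticated clients by default (endpoint taken from this run; -k is an assumption that the cluster CA is not in the host trust store):

	# Expect HTTP 200 with body "ok", matching the api_server.go lines above
	curl -sk https://192.168.49.2:8443/healthz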
	I1204 23:13:40.026053  389201 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 23:13:40.026087  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 23:13:40.026139  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 23:13:40.061619  389201 cri.go:89] found id: "697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f"
	I1204 23:13:40.061656  389201 cri.go:89] found id: ""
	I1204 23:13:40.061667  389201 logs.go:282] 1 containers: [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f]
	I1204 23:13:40.061726  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.065276  389201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 23:13:40.065347  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 23:13:40.099762  389201 cri.go:89] found id: "249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1"
	I1204 23:13:40.099784  389201 cri.go:89] found id: ""
	I1204 23:13:40.099791  389201 logs.go:282] 1 containers: [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1]
	I1204 23:13:40.099846  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.103315  389201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 23:13:40.103376  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 23:13:40.138517  389201 cri.go:89] found id: "1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2"
	I1204 23:13:40.138548  389201 cri.go:89] found id: ""
	I1204 23:13:40.138558  389201 logs.go:282] 1 containers: [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2]
	I1204 23:13:40.138608  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.142278  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 23:13:40.142338  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 23:13:40.177139  389201 cri.go:89] found id: "f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc"
	I1204 23:13:40.177162  389201 cri.go:89] found id: ""
	I1204 23:13:40.177169  389201 logs.go:282] 1 containers: [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc]
	I1204 23:13:40.177224  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.180724  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 23:13:40.180787  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 23:13:40.215881  389201 cri.go:89] found id: "76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d"
	I1204 23:13:40.215909  389201 cri.go:89] found id: ""
	I1204 23:13:40.215921  389201 logs.go:282] 1 containers: [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d]
	I1204 23:13:40.215978  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.219605  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 23:13:40.219672  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 23:13:40.254791  389201 cri.go:89] found id: "c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec"
	I1204 23:13:40.254818  389201 cri.go:89] found id: ""
	I1204 23:13:40.254830  389201 logs.go:282] 1 containers: [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec]
	I1204 23:13:40.254883  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.258537  389201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 23:13:40.258600  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 23:13:40.293449  389201 cri.go:89] found id: "f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac"
	I1204 23:13:40.293476  389201 cri.go:89] found id: ""
	I1204 23:13:40.293486  389201 logs.go:282] 1 containers: [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac]
	I1204 23:13:40.293542  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.297150  389201 logs.go:123] Gathering logs for CRI-O ...
	I1204 23:13:40.297182  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 23:13:40.372794  389201 logs.go:123] Gathering logs for container status ...
	I1204 23:13:40.372843  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 23:13:40.419461  389201 logs.go:123] Gathering logs for describe nodes ...
	I1204 23:13:40.419498  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 23:13:40.534097  389201 logs.go:123] Gathering logs for etcd [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1] ...
	I1204 23:13:40.534131  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1"
	I1204 23:13:40.578901  389201 logs.go:123] Gathering logs for coredns [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2] ...
	I1204 23:13:40.578941  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2"
	I1204 23:13:40.616890  389201 logs.go:123] Gathering logs for kube-controller-manager [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec] ...
	I1204 23:13:40.616923  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec"
	I1204 23:13:40.676313  389201 logs.go:123] Gathering logs for kube-proxy [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d] ...
	I1204 23:13:40.676354  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d"
	I1204 23:13:40.712137  389201 logs.go:123] Gathering logs for kindnet [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac] ...
	I1204 23:13:40.712171  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac"
	I1204 23:13:40.749253  389201 logs.go:123] Gathering logs for kubelet ...
	I1204 23:13:40.749283  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1204 23:13:40.793451  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:38 addons-630093 kubelet[1643]: W1204 23:11:38.340569    1643 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.793680  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:38 addons-630093 kubelet[1643]: E1204 23:11:38.340638    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.805200  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.658654    1643 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-630093" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.805392  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.658718    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.805575  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.658773    1643 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.805790  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.658814    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.805984  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661330    1643 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.806212  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661384    1643 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.806412  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661600    1643 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.806670  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661632    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.806884  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661689    1643 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.807109  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661706    1643 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.807303  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661862    1643 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.807526  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661888    1643 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.807722  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661952    1643 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.807952  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661968    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	I1204 23:13:40.842035  389201 logs.go:123] Gathering logs for dmesg ...
	I1204 23:13:40.842083  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 23:13:40.868911  389201 logs.go:123] Gathering logs for kube-apiserver [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f] ...
	I1204 23:13:40.868949  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f"
	I1204 23:13:40.915327  389201 logs.go:123] Gathering logs for kube-scheduler [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc] ...
	I1204 23:13:40.915367  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc"
	I1204 23:13:40.958116  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:13:40.958151  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1204 23:13:40.958253  389201 out.go:270] X Problems detected in kubelet:
	W1204 23:13:40.958268  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661706    1643 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.958278  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661862    1643 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.958294  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661888    1643 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.958308  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661952    1643 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.958323  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661968    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	I1204 23:13:40.958329  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:13:40.958338  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:13:50.969322  389201 system_pods.go:59] 19 kube-system pods found
	I1204 23:13:50.969358  389201 system_pods.go:61] "amd-gpu-device-plugin-xfdff" [b964506a-e0bb-4f8e-a33d-b1583ba8451e] Running
	I1204 23:13:50.969363  389201 system_pods.go:61] "coredns-7c65d6cfc9-nvslc" [e12dda0f-2d10-4096-b12f-73bd871cc18e] Running
	I1204 23:13:50.969368  389201 system_pods.go:61] "csi-hostpath-attacher-0" [af4d7f93-4989-4c1d-8c89-43d0e74f1a44] Running
	I1204 23:13:50.969372  389201 system_pods.go:61] "csi-hostpath-resizer-0" [5198084f-6ce5-4b12-89f8-5d8a76057764] Running
	I1204 23:13:50.969375  389201 system_pods.go:61] "csi-hostpathplugin-97jlr" [1d17a273-85e7-4f77-9bbe-7786a88d0ebe] Running
	I1204 23:13:50.969379  389201 system_pods.go:61] "etcd-addons-630093" [7758ddc9-6dfb-4fe8-a37f-1ef8170cd720] Running
	I1204 23:13:50.969382  389201 system_pods.go:61] "kindnet-sklhp" [a2a719ef-fccf-456e-88ac-b6e5fad34e3e] Running
	I1204 23:13:50.969387  389201 system_pods.go:61] "kube-apiserver-addons-630093" [34402f18-4ebe-4e53-9495-549544e9f70c] Running
	I1204 23:13:50.969393  389201 system_pods.go:61] "kube-controller-manager-addons-630093" [e33f5809-04da-4fb0-8265-2e29e7f90e15] Running
	I1204 23:13:50.969408  389201 system_pods.go:61] "kube-ingress-dns-minikube" [4cda5680-90e6-43e2-b35f-bf0976f6fef3] Running
	I1204 23:13:50.969415  389201 system_pods.go:61] "kube-proxy-k9l4p" [bddbd74f-1a8f-4181-b2f7-decc74059f10] Running
	I1204 23:13:50.969420  389201 system_pods.go:61] "kube-scheduler-addons-630093" [1f496311-6985-4c79-a19a-4ceade68e41e] Running
	I1204 23:13:50.969429  389201 system_pods.go:61] "metrics-server-84c5f94fbc-vtkhx" [cec44a14-191c-4123-b802-68a2c04f883d] Running
	I1204 23:13:50.969434  389201 system_pods.go:61] "nvidia-device-plugin-daemonset-rj8jd" [4960e5ae-fa86-4256-ac61-055f4d0adce3] Running
	I1204 23:13:50.969441  389201 system_pods.go:61] "registry-66c9cd494c-hxfdr" [b4aeaa23-62f9-4d1d-ba93-e79530728a03] Running
	I1204 23:13:50.969444  389201 system_pods.go:61] "registry-proxy-s54q4" [63f58b93-3d5f-4e3c-856e-74c6e4079acd] Running
	I1204 23:13:50.969453  389201 system_pods.go:61] "snapshot-controller-56fcc65765-2492d" [a604be0a-c061-4a65-9d32-0b98fff12222] Running
	I1204 23:13:50.969458  389201 system_pods.go:61] "snapshot-controller-56fcc65765-xtclh" [845fd71c-634d-41e2-a101-08a0c1458418] Running
	I1204 23:13:50.969461  389201 system_pods.go:61] "storage-provisioner" [cde6de53-e600-4898-a1c3-df78f4d4e6ff] Running
	I1204 23:13:50.969470  389201 system_pods.go:74] duration metric: took 10.943410983s to wait for pod list to return data ...
	I1204 23:13:50.969480  389201 default_sa.go:34] waiting for default service account to be created ...
	I1204 23:13:50.972205  389201 default_sa.go:45] found service account: "default"
	I1204 23:13:50.972229  389201 default_sa.go:55] duration metric: took 2.740927ms for default service account to be created ...
	I1204 23:13:50.972237  389201 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 23:13:50.980831  389201 system_pods.go:86] 19 kube-system pods found
	I1204 23:13:50.980861  389201 system_pods.go:89] "amd-gpu-device-plugin-xfdff" [b964506a-e0bb-4f8e-a33d-b1583ba8451e] Running
	I1204 23:13:50.980867  389201 system_pods.go:89] "coredns-7c65d6cfc9-nvslc" [e12dda0f-2d10-4096-b12f-73bd871cc18e] Running
	I1204 23:13:50.980872  389201 system_pods.go:89] "csi-hostpath-attacher-0" [af4d7f93-4989-4c1d-8c89-43d0e74f1a44] Running
	I1204 23:13:50.980876  389201 system_pods.go:89] "csi-hostpath-resizer-0" [5198084f-6ce5-4b12-89f8-5d8a76057764] Running
	I1204 23:13:50.980880  389201 system_pods.go:89] "csi-hostpathplugin-97jlr" [1d17a273-85e7-4f77-9bbe-7786a88d0ebe] Running
	I1204 23:13:50.980883  389201 system_pods.go:89] "etcd-addons-630093" [7758ddc9-6dfb-4fe8-a37f-1ef8170cd720] Running
	I1204 23:13:50.980887  389201 system_pods.go:89] "kindnet-sklhp" [a2a719ef-fccf-456e-88ac-b6e5fad34e3e] Running
	I1204 23:13:50.980891  389201 system_pods.go:89] "kube-apiserver-addons-630093" [34402f18-4ebe-4e53-9495-549544e9f70c] Running
	I1204 23:13:50.980895  389201 system_pods.go:89] "kube-controller-manager-addons-630093" [e33f5809-04da-4fb0-8265-2e29e7f90e15] Running
	I1204 23:13:50.980899  389201 system_pods.go:89] "kube-ingress-dns-minikube" [4cda5680-90e6-43e2-b35f-bf0976f6fef3] Running
	I1204 23:13:50.980905  389201 system_pods.go:89] "kube-proxy-k9l4p" [bddbd74f-1a8f-4181-b2f7-decc74059f10] Running
	I1204 23:13:50.980910  389201 system_pods.go:89] "kube-scheduler-addons-630093" [1f496311-6985-4c79-a19a-4ceade68e41e] Running
	I1204 23:13:50.980914  389201 system_pods.go:89] "metrics-server-84c5f94fbc-vtkhx" [cec44a14-191c-4123-b802-68a2c04f883d] Running
	I1204 23:13:50.980920  389201 system_pods.go:89] "nvidia-device-plugin-daemonset-rj8jd" [4960e5ae-fa86-4256-ac61-055f4d0adce3] Running
	I1204 23:13:50.980926  389201 system_pods.go:89] "registry-66c9cd494c-hxfdr" [b4aeaa23-62f9-4d1d-ba93-e79530728a03] Running
	I1204 23:13:50.980929  389201 system_pods.go:89] "registry-proxy-s54q4" [63f58b93-3d5f-4e3c-856e-74c6e4079acd] Running
	I1204 23:13:50.980933  389201 system_pods.go:89] "snapshot-controller-56fcc65765-2492d" [a604be0a-c061-4a65-9d32-0b98fff12222] Running
	I1204 23:13:50.980939  389201 system_pods.go:89] "snapshot-controller-56fcc65765-xtclh" [845fd71c-634d-41e2-a101-08a0c1458418] Running
	I1204 23:13:50.980943  389201 system_pods.go:89] "storage-provisioner" [cde6de53-e600-4898-a1c3-df78f4d4e6ff] Running
	I1204 23:13:50.980952  389201 system_pods.go:126] duration metric: took 8.709075ms to wait for k8s-apps to be running ...
	I1204 23:13:50.980961  389201 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 23:13:50.981009  389201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:13:50.992805  389201 system_svc.go:56] duration metric: took 11.832695ms WaitForService to wait for kubelet
	I1204 23:13:50.992839  389201 kubeadm.go:582] duration metric: took 2m12.65796392s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 23:13:50.992860  389201 node_conditions.go:102] verifying NodePressure condition ...
	I1204 23:13:50.996391  389201 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1204 23:13:50.996430  389201 node_conditions.go:123] node cpu capacity is 8
	I1204 23:13:50.996447  389201 node_conditions.go:105] duration metric: took 3.580009ms to run NodePressure ...
	I1204 23:13:50.996463  389201 start.go:241] waiting for startup goroutines ...
	I1204 23:13:50.996483  389201 start.go:246] waiting for cluster config update ...
	I1204 23:13:50.996508  389201 start.go:255] writing updated cluster config ...
	I1204 23:13:50.996891  389201 ssh_runner.go:195] Run: rm -f paused
	I1204 23:13:51.048677  389201 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 23:13:51.051940  389201 out.go:177] * Done! kubectl is now configured to use "addons-630093" cluster and "default" namespace by default
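
Everything up to this point is a clean start: all 19 kube-system pods Running, the default service account present, and the kubelet service active, with the full kubeadm wait taking 2m12s. The sections that follow are diagnostic dumps gathered later in the run, after things started failing. Cluster state at this point could be re-verified with (a sketch, using the same test binary the log invokes):

	out/minikube-linux-amd64 status -p addons-630093
	kubectl --context addons-630093 get pods -n kube-system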
	
	
	==> CRI-O <==
	Dec 04 23:18:53 addons-630093 crio[1031]: time="2024-12-04 23:18:53.810764245Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=6f76e6a0-dd92-463a-8ff1-401003d7ade5 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:18:53 addons-630093 crio[1031]: time="2024-12-04 23:18:53.811028333Z" level=info msg="Image docker.io/nginx:alpine not found" id=6f76e6a0-dd92-463a-8ff1-401003d7ade5 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:18:59 addons-630093 crio[1031]: time="2024-12-04 23:18:59.530849835Z" level=info msg="Pulling image: docker.io/nginx:latest" id=4333e13d-d0b1-4e88-bf0f-ee35ef791fc3 name=/runtime.v1.ImageService/PullImage
	Dec 04 23:18:59 addons-630093 crio[1031]: time="2024-12-04 23:18:59.534910804Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Dec 04 23:18:59 addons-630093 crio[1031]: time="2024-12-04 23:18:59.646955039Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=f9939493-f27e-4d1f-a811-a02c3e8752fe name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:18:59 addons-630093 crio[1031]: time="2024-12-04 23:18:59.647299891Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=f9939493-f27e-4d1f-a811-a02c3e8752fe name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:19:08 addons-630093 crio[1031]: time="2024-12-04 23:19:08.811285885Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=3f58bf79-6103-4a71-908c-dc9be5479a2e name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:19:08 addons-630093 crio[1031]: time="2024-12-04 23:19:08.811523532Z" level=info msg="Image docker.io/nginx:alpine not found" id=3f58bf79-6103-4a71-908c-dc9be5479a2e name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:19:11 addons-630093 crio[1031]: time="2024-12-04 23:19:11.811536416Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=e3cdaa3c-c3f6-44fe-96a1-7d5026b8622e name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:19:11 addons-630093 crio[1031]: time="2024-12-04 23:19:11.811778295Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=e3cdaa3c-c3f6-44fe-96a1-7d5026b8622e name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:19:19 addons-630093 crio[1031]: time="2024-12-04 23:19:19.811558458Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=3d082061-6fb0-43a9-a133-9cb2abe70d86 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:19:19 addons-630093 crio[1031]: time="2024-12-04 23:19:19.811782348Z" level=info msg="Image docker.io/nginx:alpine not found" id=3d082061-6fb0-43a9-a133-9cb2abe70d86 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:19:30 addons-630093 crio[1031]: time="2024-12-04 23:19:30.146500801Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=fdbdcafd-ac5b-4ec6-bac9-6bba23f37fdb name=/runtime.v1.ImageService/PullImage
	Dec 04 23:19:30 addons-630093 crio[1031]: time="2024-12-04 23:19:30.150717007Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Dec 04 23:19:43 addons-630093 crio[1031]: time="2024-12-04 23:19:43.811712576Z" level=info msg="Checking image status: docker.io/nginx:latest" id=02901228-1e80-45ed-98f8-3587e805e02e name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:19:43 addons-630093 crio[1031]: time="2024-12-04 23:19:43.811951884Z" level=info msg="Image docker.io/nginx:latest not found" id=02901228-1e80-45ed-98f8-3587e805e02e name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:19:48 addons-630093 crio[1031]: time="2024-12-04 23:19:48.027889436Z" level=info msg="Stopping container: c0b9ea5a54fce6f7ab008e85bf645783dfab5ad639c39d9edd23edb4365258d7 (timeout: 30s)" id=5603a7b7-7d34-4b2f-aa23-6b47317be399 name=/runtime.v1.RuntimeService/StopContainer
	Dec 04 23:19:56 addons-630093 crio[1031]: time="2024-12-04 23:19:56.811117928Z" level=info msg="Checking image status: docker.io/nginx:latest" id=4543103a-baf1-4146-aa1b-680a12f9014a name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:19:56 addons-630093 crio[1031]: time="2024-12-04 23:19:56.811448970Z" level=info msg="Image docker.io/nginx:latest not found" id=4543103a-baf1-4146-aa1b-680a12f9014a name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:20:00 addons-630093 crio[1031]: time="2024-12-04 23:20:00.762160783Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=038464ef-78a2-43f0-89a4-86996706752f name=/runtime.v1.ImageService/PullImage
	Dec 04 23:20:00 addons-630093 crio[1031]: time="2024-12-04 23:20:00.777717835Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Dec 04 23:20:00 addons-630093 crio[1031]: time="2024-12-04 23:20:00.788729262Z" level=info msg="Stopping pod sandbox: e8a2944d65d8c9a05415b163e026680b8a8f0c82a4012e55edbf991f80ade8a3" id=c281e344-4bf7-4c4d-8380-cf522d488985 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 04 23:20:00 addons-630093 crio[1031]: time="2024-12-04 23:20:00.789042794Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-6694fa78-6bb2-4438-95f7-35ce09d8863d Namespace:local-path-storage ID:e8a2944d65d8c9a05415b163e026680b8a8f0c82a4012e55edbf991f80ade8a3 UID:64785593-c5b1-4a4b-839f-c12c766ae92f NetNS:/var/run/netns/2a857a5d-63b0-4662-a745-0f93c97fd538 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 04 23:20:00 addons-630093 crio[1031]: time="2024-12-04 23:20:00.789196575Z" level=info msg="Deleting pod local-path-storage_helper-pod-create-pvc-6694fa78-6bb2-4438-95f7-35ce09d8863d from CNI network \"kindnet\" (type=ptp)"
	Dec 04 23:20:00 addons-630093 crio[1031]: time="2024-12-04 23:20:00.832388437Z" level=info msg="Stopped pod sandbox: e8a2944d65d8c9a05415b163e026680b8a8f0c82a4012e55edbf991f80ade8a3" id=c281e344-4bf7-4c4d-8380-cf522d488985 name=/runtime.v1.RuntimeService/StopPodSandbox
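
The CRI-O entries above show the runtime repeatedly checking for docker.io/nginx:alpine and docker.io/nginx:latest, finding neither cached, and re-queuing pulls — which matches the nginx pod's ImagePullBackOff in the test failure. A minimal triage sketch, assuming shell access to the node through the test binary (the grep filter and the manual re-pull are illustrative, not part of the test):

	# List what the runtime actually has cached
	out/minikube-linux-amd64 -p addons-630093 ssh "sudo crictl images | grep nginx"

	# Retry the pull by hand to surface the underlying error
	# (for docker.io images this is often registry rate limiting)
	out/minikube-linux-amd64 -p addons-630093 ssh "sudo crictl pull docker.io/nginx:alpine"

	# Cross-check the kubelet's view of the failing pod
	kubectl --context addons-630093 describe pod nginx -n default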
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	a92f917845840       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   9101d3097d84d       busybox
	19a975e308aa0       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b                             6 minutes ago       Running             controller                               0                   f7e4db205d4a2       ingress-nginx-controller-5f85ff4588-bjrmz
	153039955b8e9       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          7 minutes ago       Running             csi-snapshotter                          0                   75bf3104e4902       csi-hostpathplugin-97jlr
	86a86137e5e1a       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          7 minutes ago       Running             csi-provisioner                          0                   75bf3104e4902       csi-hostpathplugin-97jlr
	722cda2e61fdf       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            7 minutes ago       Running             liveness-probe                           0                   75bf3104e4902       csi-hostpathplugin-97jlr
	520228ead6e81       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           7 minutes ago       Running             hostpath                                 0                   75bf3104e4902       csi-hostpathplugin-97jlr
	904410f83eb89       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                7 minutes ago       Running             node-driver-registrar                    0                   75bf3104e4902       csi-hostpathplugin-97jlr
	d43b4e626d869       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f                   7 minutes ago       Exited              patch                                    0                   1453371ecba6e       ingress-nginx-admission-patch-6klmq
	9cfd8f1d1fc9d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f                   7 minutes ago       Exited              create                                   0                   6a2e4839790d0       ingress-nginx-admission-create-g9mgr
	c0b9ea5a54fce       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             7 minutes ago       Running             local-path-provisioner                   0                   2a9f5fb1eead6       local-path-provisioner-86d989889c-zjwsn
	3c19424241254       gcr.io/cloud-spanner-emulator/emulator@sha256:11b3615343c74d3c4ef7c4668305a87e2cab287dcab89fe2570e8d4d91938927                               7 minutes ago       Running             cloud-spanner-emulator                   0                   7e0131b1c64fc       cloud-spanner-emulator-dc5db94f4-qb868
	31862be06ca2f       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   7 minutes ago       Running             csi-external-health-monitor-controller   0                   75bf3104e4902       csi-hostpathplugin-97jlr
	c3bf77a4a88bb       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   6be372042ec01       snapshot-controller-56fcc65765-xtclh
	4bde5393ab673       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a                        7 minutes ago       Running             metrics-server                           0                   483727d0ea1ad       metrics-server-84c5f94fbc-vtkhx
	ad2a02af7805b       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      8 minutes ago       Running             volume-snapshot-controller               0                   ed2dd407b0f06       snapshot-controller-56fcc65765-2492d
	34d29b45443cc       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             8 minutes ago       Running             minikube-ingress-dns                     0                   fe05a9e0f9e54       kube-ingress-dns-minikube
	facaa7e1e233d       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             8 minutes ago       Running             csi-attacher                             0                   5c82f2a4a9fdc       csi-hostpath-attacher-0
	86ba1534808a8       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              8 minutes ago       Running             csi-resizer                              0                   0e397ea764d0c       csi-hostpath-resizer-0
	1c628d0404971       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             8 minutes ago       Running             coredns                                  0                   e5a18048ffd94       coredns-7c65d6cfc9-nvslc
	7579ef8738441       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             8 minutes ago       Running             storage-provisioner                      0                   53117b6914cba       storage-provisioner
	f0e1e1197d418       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16                                           8 minutes ago       Running             kindnet-cni                              0                   8e1077c9b19f2       kindnet-sklhp
	76b8a8033f246       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                                             8 minutes ago       Running             kube-proxy                               0                   7b72d950d834d       kube-proxy-k9l4p
	f25ca8d234e67       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                                             8 minutes ago       Running             kube-scheduler                           0                   6ecfaa8cbb0a8       kube-scheduler-addons-630093
	697a8666b9beb       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                                             8 minutes ago       Running             kube-apiserver                           0                   c5cc52570c5da       kube-apiserver-addons-630093
	249b17c70ce14       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             8 minutes ago       Running             etcd                                     0                   5c544b67b37e6       etcd-addons-630093
	c18ad7ba7d7db       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                                             8 minutes ago       Running             kube-controller-manager                  0                   2b2d046f58c6b       kube-controller-manager-addons-630093
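
The table above is the CRI-level view of every container on the node, including the Exited admission-webhook jobs. An approximately equivalent listing can be reproduced directly on the node (a sketch, assuming crictl is on the PATH as the earlier log lines indicate):

	out/minikube-linux-amd64 -p addons-630093 ssh "sudo crictl ps -a"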
	
	
	==> coredns [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2] <==
	[INFO] 10.244.0.13:36200 - 58124 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000101425s
	[INFO] 10.244.0.13:43691 - 63611 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005338233s
	[INFO] 10.244.0.13:43691 - 63271 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005381209s
	[INFO] 10.244.0.13:44344 - 26272 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005410445s
	[INFO] 10.244.0.13:44344 - 26005 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006018948s
	[INFO] 10.244.0.13:60838 - 12332 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005880377s
	[INFO] 10.244.0.13:60838 - 12579 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006174676s
	[INFO] 10.244.0.13:53538 - 12345 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000091701s
	[INFO] 10.244.0.13:53538 - 12144 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000126528s
	[INFO] 10.244.0.21:59547 - 34898 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000213243s
	[INFO] 10.244.0.21:42413 - 63992 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000314574s
	[INFO] 10.244.0.21:50534 - 50228 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001818s
	[INFO] 10.244.0.21:44438 - 35236 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000136337s
	[INFO] 10.244.0.21:49334 - 10258 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000138449s
	[INFO] 10.244.0.21:53611 - 11525 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00012321s
	[INFO] 10.244.0.21:33638 - 34118 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007323199s
	[INFO] 10.244.0.21:43427 - 30051 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007940861s
	[INFO] 10.244.0.21:43377 - 12238 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008381865s
	[INFO] 10.244.0.21:40602 - 12057 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.009350731s
	[INFO] 10.244.0.21:47148 - 45016 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007185414s
	[INFO] 10.244.0.21:42834 - 25970 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007493941s
	[INFO] 10.244.0.21:44226 - 13563 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001030468s
	[INFO] 10.244.0.21:36544 - 7675 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001087253s
	[INFO] 10.244.0.25:33322 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000238152s
	[INFO] 10.244.0.25:43627 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00014501s
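
The long NXDOMAIN chains above are normal search-path expansion, not a DNS fault: with ndots:5, a name such as registry.kube-system.svc.cluster.local is tried with every suffix from the pod's resolv.conf (cluster.local, then the GCE-injected *.internal domains) before the absolute name returns NOERROR. A quick way to confirm the search path from inside a pod — the expected content in the comment is an assumption inferred from the suffixes visible in the queries above:

	kubectl --context addons-630093 exec busybox -- cat /etc/resolv.conf
	# expected to contain something like:
	#   search default.svc.cluster.local svc.cluster.local cluster.local us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	#   options ndots:5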
	
	
	==> describe nodes <==
	Name:               addons-630093
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-630093
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=addons-630093
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T23_11_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-630093
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-630093"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:11:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-630093
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 23:20:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 23:19:41 +0000   Wed, 04 Dec 2024 23:11:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 23:19:41 +0000   Wed, 04 Dec 2024 23:11:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 23:19:41 +0000   Wed, 04 Dec 2024 23:11:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 23:19:41 +0000   Wed, 04 Dec 2024 23:11:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-630093
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	System Info:
	  Machine ID:                 8258e1e2133c40cebfa95f57ba32eee3
	  System UUID:                bf67fca3-467d-49b0-b09d-7f56669f6671
	  Boot ID:                    ac1c7763-4d61-4ba9-8c16-bcbc5ed122b3
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (21 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  default                     cloud-spanner-emulator-dc5db94f4-qb868       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m41s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m44s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-bjrmz    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         8m23s
	  kube-system                 coredns-7c65d6cfc9-nvslc                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m29s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 csi-hostpathplugin-97jlr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m10s
	  kube-system                 etcd-addons-630093                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m35s
	  kube-system                 kindnet-sklhp                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m29s
	  kube-system                 kube-apiserver-addons-630093                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 kube-controller-manager-addons-630093        200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
	  kube-system                 kube-proxy-k9l4p                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 kube-scheduler-addons-630093                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m35s
	  kube-system                 metrics-server-84c5f94fbc-vtkhx              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         8m25s
	  kube-system                 snapshot-controller-56fcc65765-2492d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 snapshot-controller-56fcc65765-xtclh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m22s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m25s
	  local-path-storage          local-path-provisioner-86d989889c-zjwsn      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             510Mi (1%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m24s                  kube-proxy       
	  Normal   Starting                 8m40s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m40s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  8m40s (x8 over 8m40s)  kubelet          Node addons-630093 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m40s (x8 over 8m40s)  kubelet          Node addons-630093 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m40s (x7 over 8m40s)  kubelet          Node addons-630093 status is now: NodeHasSufficientPID
	  Normal   Starting                 8m35s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m35s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  8m34s                  kubelet          Node addons-630093 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m34s                  kubelet          Node addons-630093 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m34s                  kubelet          Node addons-630093 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m30s                  node-controller  Node addons-630093 event: Registered Node addons-630093 in Controller
	  Normal   NodeReady                8m10s                  kubelet          Node addons-630093 status is now: NodeReady
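
The node stayed Ready with no memory, disk, or PID pressure for the whole run, so the image-pull failures are not a node-resource problem. The condition table above can be re-extracted at any point during a run with (a sketch; the jsonpath simply prints each condition type and status):

	kubectl --context addons-630093 get node addons-630093 \
	  -o jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\n"}{end}'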
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 46 91 d1 19 2f 08 06
	[Dec 4 22:54] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 d8 34 c4 9e fd 08 06
	[  +0.000456] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 46 91 d1 19 2f 08 06
	[ +35.699001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 90 40 5e 28 e1 08 06
	[Dec 4 22:55] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 3d b0 9a 20 99 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff de 90 40 5e 28 e1 08 06
	[  +1.225322] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000021] ll header: 00000000: ff ff ff ff ff ff b2 70 f6 e4 04 7e 08 06
	[  +0.023795] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a e9 42 d7 ae 99 08 06
	[  +8.010933] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 92 a5 ca 19 c6 08 06
	[ +18.260065] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9e b7 56 b9 28 5b 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ae 92 a5 ca 19 c6 08 06
	[ +24.579912] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa ca b1 23 b4 91 08 06
	[  +0.000531] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3a e9 42 d7 ae 99 08 06
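
The repeated "martian source" lines are kernel reverse-path logging for packets whose source address looks impossible on the receiving interface; they are common with bridged container traffic and are noise here rather than a failure cause. The sysctls that govern this logging can be inspected on the node (a diagnostic sketch only):

	out/minikube-linux-amd64 -p addons-630093 ssh "sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians"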
	
	
	==> etcd [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1] <==
	{"level":"info","ts":"2024-12-04T23:11:40.217773Z","caller":"traceutil/trace.go:171","msg":"trace[1405136476] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-addons-630093; range_end:; response_count:1; response_revision:392; }","duration":"108.112329ms","start":"2024-12-04T23:11:40.109647Z","end":"2024-12-04T23:11:40.217759Z","steps":["trace[1405136476] 'agreement among raft nodes before linearized reading'  (duration: 103.402111ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:11:40.605094Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.675544ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-12-04T23:11:40.605257Z","caller":"traceutil/trace.go:171","msg":"trace[803689926] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:398; }","duration":"198.852168ms","start":"2024-12-04T23:11:40.406387Z","end":"2024-12-04T23:11:40.605239Z","steps":["trace[803689926] 'range keys from in-memory index tree'  (duration: 194.382666ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:11:40.708502Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.336878ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128033691115604618 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/kube-system/amd-gpu-device-plugin\" mod_revision:0 > success:<request_put:<key:\"/registry/daemonsets/kube-system/amd-gpu-device-plugin\" value_size:3622 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-12-04T23:11:40.895257Z","caller":"traceutil/trace.go:171","msg":"trace[1109807764] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"279.117548ms","start":"2024-12-04T23:11:40.616120Z","end":"2024-12-04T23:11:40.895238Z","steps":["trace[1109807764] 'process raft request'  (duration: 279.078288ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T23:11:40.895484Z","caller":"traceutil/trace.go:171","msg":"trace[215470366] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"387.51899ms","start":"2024-12-04T23:11:40.507954Z","end":"2024-12-04T23:11:40.895473Z","steps":["trace[215470366] 'process raft request'  (duration: 96.858883ms)","trace[215470366] 'compare'  (duration: 103.229726ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-04T23:11:40.895555Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-04T23:11:40.507931Z","time spent":"387.575868ms","remote":"127.0.0.1:59108","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3684,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/amd-gpu-device-plugin\" mod_revision:0 > success:<request_put:<key:\"/registry/daemonsets/kube-system/amd-gpu-device-plugin\" value_size:3622 >> failure:<>"}
	{"level":"info","ts":"2024-12-04T23:11:40.895855Z","caller":"traceutil/trace.go:171","msg":"trace[2076159084] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"288.040682ms","start":"2024-12-04T23:11:40.607803Z","end":"2024-12-04T23:11:40.895844Z","steps":["trace[2076159084] 'process raft request'  (duration: 287.297204ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T23:11:40.895959Z","caller":"traceutil/trace.go:171","msg":"trace[705242873] linearizableReadLoop","detail":"{readStateIndex:410; appliedIndex:408; }","duration":"280.349916ms","start":"2024-12-04T23:11:40.615601Z","end":"2024-12-04T23:11:40.895951Z","steps":["trace[705242873] 'read index received'  (duration: 83.684619ms)","trace[705242873] 'applied index is now lower than readState.Index'  (duration: 196.664648ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-04T23:11:40.896113Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"280.608929ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-addons-630093\" ","response":"range_response_count:1 size:7253"}
	{"level":"info","ts":"2024-12-04T23:11:40.896138Z","caller":"traceutil/trace.go:171","msg":"trace[1318972100] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-addons-630093; range_end:; response_count:1; response_revision:401; }","duration":"280.640123ms","start":"2024-12-04T23:11:40.615490Z","end":"2024-12-04T23:11:40.896130Z","steps":["trace[1318972100] 'agreement among raft nodes before linearized reading'  (duration: 280.572794ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:11:40.896264Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"280.36641ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T23:11:40.896282Z","caller":"traceutil/trace.go:171","msg":"trace[697950005] range","detail":"{range_begin:/registry/storageclasses; range_end:; response_count:0; response_revision:401; }","duration":"280.385448ms","start":"2024-12-04T23:11:40.615891Z","end":"2024-12-04T23:11:40.896276Z","steps":["trace[697950005] 'agreement among raft nodes before linearized reading'  (duration: 280.354047ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:11:41.603321Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.477454ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T23:11:41.603924Z","caller":"traceutil/trace.go:171","msg":"trace[1769666947] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:419; }","duration":"106.090798ms","start":"2024-12-04T23:11:41.497809Z","end":"2024-12-04T23:11:41.603899Z","steps":["trace[1769666947] 'agreement among raft nodes before linearized reading'  (duration: 105.439451ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:11:41.603524Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.607937ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-addons-630093\" ","response":"range_response_count:1 size:7253"}
	{"level":"info","ts":"2024-12-04T23:11:41.604378Z","caller":"traceutil/trace.go:171","msg":"trace[1429916583] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-addons-630093; range_end:; response_count:1; response_revision:419; }","duration":"101.463597ms","start":"2024-12-04T23:11:41.502900Z","end":"2024-12-04T23:11:41.604364Z","steps":["trace[1429916583] 'agreement among raft nodes before linearized reading'  (duration: 100.553991ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T23:11:42.012812Z","caller":"traceutil/trace.go:171","msg":"trace[1073586070] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"101.602813ms","start":"2024-12-04T23:11:41.911189Z","end":"2024-12-04T23:11:42.012792Z","steps":["trace[1073586070] 'process raft request'  (duration: 87.210063ms)","trace[1073586070] 'compare'  (duration: 13.942562ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-04T23:11:42.012996Z","caller":"traceutil/trace.go:171","msg":"trace[73910532] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"101.658352ms","start":"2024-12-04T23:11:41.911329Z","end":"2024-12-04T23:11:42.012987Z","steps":["trace[73910532] 'process raft request'  (duration: 101.143669ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T23:11:42.013256Z","caller":"traceutil/trace.go:171","msg":"trace[1994636355] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"101.69878ms","start":"2024-12-04T23:11:41.911547Z","end":"2024-12-04T23:11:42.013245Z","steps":["trace[1994636355] 'process raft request'  (duration: 100.967611ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:11:42.096651Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.399561ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T23:11:42.096715Z","caller":"traceutil/trace.go:171","msg":"trace[1209668564] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:440; }","duration":"178.473778ms","start":"2024-12-04T23:11:41.918228Z","end":"2024-12-04T23:11:42.096702Z","steps":["trace[1209668564] 'agreement among raft nodes before linearized reading'  (duration: 178.384048ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:11:42.097064Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.915985ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-ingress-dns-minikube\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T23:11:42.099886Z","caller":"traceutil/trace.go:171","msg":"trace[231438469] range","detail":"{range_begin:/registry/pods/kube-system/kube-ingress-dns-minikube; range_end:; response_count:0; response_revision:440; }","duration":"181.736324ms","start":"2024-12-04T23:11:41.918132Z","end":"2024-12-04T23:11:42.099868Z","steps":["trace[231438469] 'agreement among raft nodes before linearized reading'  (duration: 178.596552ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T23:11:44.318424Z","caller":"traceutil/trace.go:171","msg":"trace[299548537] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"105.793664ms","start":"2024-12-04T23:11:44.212613Z","end":"2024-12-04T23:11:44.318407Z","steps":["trace[299548537] 'process raft request'  (duration: 103.084576ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:20:07 up  2:02,  0 users,  load average: 0.35, 0.50, 0.80
	Linux addons-630093 5.15.0-1071-gcp #79~20.04.1-Ubuntu SMP Thu Oct 17 21:59:34 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac] <==
	I1204 23:18:07.395843       1 main.go:301] handling current node
	I1204 23:18:17.398735       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:18:17.398775       1 main.go:301] handling current node
	I1204 23:18:27.395698       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:18:27.395786       1 main.go:301] handling current node
	I1204 23:18:37.402744       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:18:37.402787       1 main.go:301] handling current node
	I1204 23:18:47.396592       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:18:47.396635       1 main.go:301] handling current node
	I1204 23:18:57.395818       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:18:57.395863       1 main.go:301] handling current node
	I1204 23:19:07.397501       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:19:07.397546       1 main.go:301] handling current node
	I1204 23:19:17.398712       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:19:17.398746       1 main.go:301] handling current node
	I1204 23:19:27.398720       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:19:27.398771       1 main.go:301] handling current node
	I1204 23:19:37.402734       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:19:37.402778       1 main.go:301] handling current node
	I1204 23:19:47.395778       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:19:47.395820       1 main.go:301] handling current node
	I1204 23:19:57.395656       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:19:57.395708       1 main.go:301] handling current node
	I1204 23:20:07.395877       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:20:07.395937       1 main.go:301] handling current node
	
	
	==> kube-apiserver [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f] <==
	E1204 23:11:57.667972       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.135.57:443: connect: connection refused" logger="UnhandledError"
	W1204 23:12:44.501182       1 handler_proxy.go:99] no RequestInfo found in the context
	W1204 23:12:44.501182       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 23:12:44.501270       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1204 23:12:44.501295       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1204 23:12:44.502403       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1204 23:12:44.502426       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1204 23:13:18.020994       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 23:13:18.021061       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.81.204:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.81.204:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.81.204:443: connect: connection refused" logger="UnhandledError"
	E1204 23:13:18.021072       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1204 23:13:18.022591       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.81.204:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.81.204:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.81.204:443: connect: connection refused" logger="UnhandledError"
	I1204 23:13:18.053200       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1204 23:13:59.747428       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54842: use of closed network connection
	E1204 23:13:59.921107       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54876: use of closed network connection
	I1204 23:14:08.946781       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.65.33"}
	I1204 23:14:25.954565       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1204 23:14:26.167940       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.235.196"}
	I1204 23:14:28.188596       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1204 23:14:29.205715       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
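
The 503s and "connect: connection refused" errors above all trace back to the v1beta1.metrics.k8s.io aggregated API while metrics-server was still coming up; by 23:13:18 the GroupVersion is registered and the errors stop. The registration state and backing pod can be checked directly — the k8s-app label here is assumed from the upstream metrics-server manifests:

	kubectl --context addons-630093 get apiservice v1beta1.metrics.k8s.io
	kubectl --context addons-630093 -n kube-system get pods -l k8s-app=metrics-server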
	
	
	==> kube-controller-manager [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec] <==
	E1204 23:14:39.494738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1204 23:14:39.957349       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="12.035µs"
	I1204 23:14:50.067311       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W1204 23:14:51.659881       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:14:51.659934       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 23:15:15.331968       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:15:15.332023       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 23:15:41.664844       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:15:41.664897       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 23:16:29.575804       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:16:29.575854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 23:17:02.559821       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:17:02.559870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 23:17:45.806997       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:17:45.807050       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 23:18:26.298216       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:18:26.298264       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 23:19:04.552124       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:19:04.552173       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1204 23:19:41.406992       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-630093"
	I1204 23:19:48.019395       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="12.981µs"
	E1204 23:19:52.162747       1 pv_controller.go:1586] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	W1204 23:19:52.414535       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:19:52.414579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E1204 23:20:07.163168       1 pv_controller.go:1586] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
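Two notes on the controller-manager tail above. The repeated *v1.PartialObjectMetadata watch failures are most likely the metadata informer still watching the gadget.kinvolk.io resources the apiserver removed at 23:14:28 (see the apiserver log above); noisy but benign. The "local-path" errors are different: PVC default/test-pvc names a StorageClass that does not exist yet. A minimal triage sketch, reusing this report's kubectl context:

  kubectl --context addons-630093 get storageclass
  kubectl --context addons-630093 get pvc test-pvc -n default -o jsonpath='{.spec.storageClassName}'

Until the local-path-provisioner addon installs that class, the claim stays Pending.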
	
	
	==> kube-proxy [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d] <==
	I1204 23:11:41.999798       1 server_linux.go:66] "Using iptables proxy"
	I1204 23:11:42.522412       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1204 23:11:42.522510       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 23:11:42.915799       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1204 23:11:42.916905       1 server_linux.go:169] "Using iptables Proxier"
	I1204 23:11:42.999168       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 23:11:42.999868       1 server.go:483] "Version info" version="v1.31.2"
	I1204 23:11:42.999987       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 23:11:43.001630       1 config.go:199] "Starting service config controller"
	I1204 23:11:43.002952       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 23:11:43.002663       1 config.go:328] "Starting node config controller"
	I1204 23:11:43.003244       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 23:11:43.002141       1 config.go:105] "Starting endpoint slice config controller"
	I1204 23:11:43.003442       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 23:11:43.105483       1 shared_informer.go:320] Caches are synced for node config
	I1204 23:11:43.105660       1 shared_informer.go:320] Caches are synced for service config
	I1204 23:11:43.105772       1 shared_informer.go:320] Caches are synced for endpoint slice config
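The kube-proxy warning above is advisory: with nodePortAddresses unset, NodePort services accept traffic on every local IP. A hedged sketch of the fix the log itself suggests, assuming the kubeadm-default ConfigMap layout that minikube uses (verify the field against the v1.31 KubeProxyConfiguration reference before editing):

  kubectl --context addons-630093 -n kube-system edit configmap kube-proxy
  #   in the config.conf key, set: nodePortAddresses: ["primary"]
  kubectl --context addons-630093 -n kube-system rollout restart daemonset kube-proxy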
	
	
	==> kube-scheduler [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc] <==
	W1204 23:11:30.518306       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1204 23:11:30.518308       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1204 23:11:30.518319       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E1204 23:11:30.518324       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:30.518387       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1204 23:11:30.518406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.464973       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1204 23:11:31.465022       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1204 23:11:31.504488       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1204 23:11:31.504541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.546483       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1204 23:11:31.546559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.565052       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1204 23:11:31.565112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.572602       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1204 23:11:31.572647       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.606116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1204 23:11:31.606166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.628789       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1204 23:11:31.628843       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.663323       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1204 23:11:31.663367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.685908       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1204 23:11:31.685980       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1204 23:11:33.616392       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
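The burst of "forbidden" errors above is the scheduler starting before its RBAC bindings and informer caches were ready; the closing "Caches are synced" line shows it recovered on its own. If such errors persisted, one way to probe the binding directly (a sketch, same context as the rest of the report):

  kubectl --context addons-630093 auth can-i list persistentvolumes --as=system:kube-scheduler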
	
	
	==> kubelet <==
	Dec 04 23:19:30 addons-630093 kubelet[1643]: E1204 23:19:30.147619    1643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="7d7d08b6-0c55-4e1e-af14-bcf120b4fe87"
	Dec 04 23:19:33 addons-630093 kubelet[1643]: E1204 23:19:33.010209    1643 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354373009929666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:19:33 addons-630093 kubelet[1643]: E1204 23:19:33.010265    1643 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354373009929666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:19:43 addons-630093 kubelet[1643]: E1204 23:19:43.012198    1643 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354383011899100,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:19:43 addons-630093 kubelet[1643]: E1204 23:19:43.012231    1643 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354383011899100,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:19:43 addons-630093 kubelet[1643]: E1204 23:19:43.812243    1643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod" podUID="7d7d08b6-0c55-4e1e-af14-bcf120b4fe87"
	Dec 04 23:19:53 addons-630093 kubelet[1643]: E1204 23:19:53.015145    1643 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354393014838853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:19:53 addons-630093 kubelet[1643]: E1204 23:19:53.015190    1643 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354393014838853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:19:56 addons-630093 kubelet[1643]: E1204 23:19:56.811695    1643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod" podUID="7d7d08b6-0c55-4e1e-af14-bcf120b4fe87"
	Dec 04 23:20:00 addons-630093 kubelet[1643]: E1204 23:20:00.761634    1643 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 04 23:20:00 addons-630093 kubelet[1643]: E1204 23:20:00.761717    1643 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 04 23:20:00 addons-630093 kubelet[1643]: E1204 23:20:00.761978    1643 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:helper-pod,Image:docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79,Command:[/bin/sh /script/setup],Args:[-p /opt/local-path-provisioner/pvc-6694fa78-6bb2-4438-95f7-35ce09d8863d_default_test-pvc -s 67108864 -m Filesystem],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:VOL_DIR,Value:/opt/local-path-provisioner/pvc-6694fa78-6bb2-4438-95f7-35ce09d8863d_default_test-pvc,ValueFrom:nil,},EnvVar{Name:VOL_MODE,Value:Filesystem,ValueFrom:nil,},EnvVar{Name:VOL_SIZE_BYTES,Value:67108864,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:script,ReadOnly:false,MountPath:/script,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:data,ReadOnly:false,MountPath:/opt/local-path-provisioner/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qtvmh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod helper-pod-create-pvc-6694fa78-6bb2-4438-95f7-35ce09d8863d_local-path-storage(64785593-c5b1-4a4b-839f-c12c766ae92f): ErrImagePull: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 04 23:20:00 addons-630093 kubelet[1643]: E1204 23:20:00.763301    1643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-6694fa78-6bb2-4438-95f7-35ce09d8863d" podUID="64785593-c5b1-4a4b-839f-c12c766ae92f"
	Dec 04 23:20:01 addons-630093 kubelet[1643]: I1204 23:20:01.004069    1643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/64785593-c5b1-4a4b-839f-c12c766ae92f-data\") pod \"64785593-c5b1-4a4b-839f-c12c766ae92f\" (UID: \"64785593-c5b1-4a4b-839f-c12c766ae92f\") "
	Dec 04 23:20:01 addons-630093 kubelet[1643]: I1204 23:20:01.004133    1643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/64785593-c5b1-4a4b-839f-c12c766ae92f-script\") pod \"64785593-c5b1-4a4b-839f-c12c766ae92f\" (UID: \"64785593-c5b1-4a4b-839f-c12c766ae92f\") "
	Dec 04 23:20:01 addons-630093 kubelet[1643]: I1204 23:20:01.004171    1643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qtvmh\" (UniqueName: \"kubernetes.io/projected/64785593-c5b1-4a4b-839f-c12c766ae92f-kube-api-access-qtvmh\") pod \"64785593-c5b1-4a4b-839f-c12c766ae92f\" (UID: \"64785593-c5b1-4a4b-839f-c12c766ae92f\") "
	Dec 04 23:20:01 addons-630093 kubelet[1643]: I1204 23:20:01.004161    1643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/64785593-c5b1-4a4b-839f-c12c766ae92f-data" (OuterVolumeSpecName: "data") pod "64785593-c5b1-4a4b-839f-c12c766ae92f" (UID: "64785593-c5b1-4a4b-839f-c12c766ae92f"). InnerVolumeSpecName "data". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Dec 04 23:20:01 addons-630093 kubelet[1643]: I1204 23:20:01.004289    1643 reconciler_common.go:288] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/64785593-c5b1-4a4b-839f-c12c766ae92f-data\") on node \"addons-630093\" DevicePath \"\""
	Dec 04 23:20:01 addons-630093 kubelet[1643]: I1204 23:20:01.004566    1643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64785593-c5b1-4a4b-839f-c12c766ae92f-script" (OuterVolumeSpecName: "script") pod "64785593-c5b1-4a4b-839f-c12c766ae92f" (UID: "64785593-c5b1-4a4b-839f-c12c766ae92f"). InnerVolumeSpecName "script". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Dec 04 23:20:01 addons-630093 kubelet[1643]: I1204 23:20:01.006473    1643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64785593-c5b1-4a4b-839f-c12c766ae92f-kube-api-access-qtvmh" (OuterVolumeSpecName: "kube-api-access-qtvmh") pod "64785593-c5b1-4a4b-839f-c12c766ae92f" (UID: "64785593-c5b1-4a4b-839f-c12c766ae92f"). InnerVolumeSpecName "kube-api-access-qtvmh". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 04 23:20:01 addons-630093 kubelet[1643]: I1204 23:20:01.104941    1643 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qtvmh\" (UniqueName: \"kubernetes.io/projected/64785593-c5b1-4a4b-839f-c12c766ae92f-kube-api-access-qtvmh\") on node \"addons-630093\" DevicePath \"\""
	Dec 04 23:20:01 addons-630093 kubelet[1643]: I1204 23:20:01.104982    1643 reconciler_common.go:288] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/64785593-c5b1-4a4b-839f-c12c766ae92f-script\") on node \"addons-630093\" DevicePath \"\""
	Dec 04 23:20:02 addons-630093 kubelet[1643]: I1204 23:20:02.812828    1643 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64785593-c5b1-4a4b-839f-c12c766ae92f" path="/var/lib/kubelet/pods/64785593-c5b1-4a4b-839f-c12c766ae92f/volumes"
	Dec 04 23:20:03 addons-630093 kubelet[1643]: E1204 23:20:03.017143    1643 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354403016849871,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:20:03 addons-630093 kubelet[1643]: E1204 23:20:03.017178    1643 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354403016849871,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
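Every hard failure in this kubelet log traces back to Docker Hub's anonymous pull limit (toomanyrequests). A mitigation sketch, not part of this run; regcred is a made-up secret name and the credentials are placeholders:

  kubectl --context addons-630093 create secret docker-registry regcred \
    --docker-server=https://index.docker.io/v1/ \
    --docker-username=<user> --docker-password=<token>
  kubectl --context addons-630093 patch serviceaccount default \
    -p '{"imagePullSecrets": [{"name": "regcred"}]}'

Alternatively, pre-load the images into the node so the kubelet never pulls from the registry:

  minikube -p addons-630093 image load docker.io/nginx:alpine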
	
	
	==> storage-provisioner [7579ef87384414e56ddfe0b7d9482bd87f3030a02185f51552230baf2942b017] <==
	I1204 23:11:58.350091       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1204 23:11:58.357669       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1204 23:11:58.357713       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1204 23:11:58.365574       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1204 23:11:58.365696       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7e65eeda-0a1f-4ed0-93d5-7510680ef7a9", APIVersion:"v1", ResourceVersion:"914", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-630093_4fbeb0c1-dfd3-440b-90ad-a51f627c5476 became leader
	I1204 23:11:58.365747       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-630093_4fbeb0c1-dfd3-440b-90ad-a51f627c5476!
	I1204 23:11:58.466731       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-630093_4fbeb0c1-dfd3-440b-90ad-a51f627c5476!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-630093 -n addons-630093
helpers_test.go:261: (dbg) Run:  kubectl --context addons-630093 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-g9mgr ingress-nginx-admission-patch-6klmq
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-630093 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-g9mgr ingress-nginx-admission-patch-6klmq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-630093 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-g9mgr ingress-nginx-admission-patch-6klmq: exit status 1 (87.599118ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-630093/192.168.49.2
	Start Time:       Wed, 04 Dec 2024 23:14:26 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-49bg2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-49bg2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  5m42s                 default-scheduler  Successfully assigned default/nginx to addons-630093
	  Warning  Failed     100s (x3 over 4m43s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     100s (x3 over 4m43s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    60s (x5 over 4m43s)   kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     60s (x5 over 4m43s)   kubelet            Error: ImagePullBackOff
	  Normal   Pulling    49s (x4 over 5m42s)   kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-630093/192.168.49.2
	Start Time:       Wed, 04 Dec 2024 23:14:23 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bbll2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-bbll2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m45s                default-scheduler  Successfully assigned default/task-pv-pod to addons-630093
	  Warning  Failed     5m14s                kubelet            Failed to pull image "docker.io/nginx": initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    88s (x4 over 5m45s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     38s (x4 over 5m14s)  kubelet            Error: ErrImagePull
	  Warning  Failed     38s (x3 over 3m42s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    12s (x6 over 5m13s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     12s (x6 over 5m13s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jd9np (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-jd9np:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-g9mgr" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-6klmq" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-630093 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-g9mgr ingress-nginx-admission-patch-6klmq: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-630093 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (355.56s)

                                                
                                    
TestAddons/parallel/CSI (379.17s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:488: csi-hostpath-driver pods stabilized in 10.260605ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-630093 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc hpvc -o jsonpath={.status.phase} -n default
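The ten back-to-back jsonpath polls above are the test helper waiting for the claim to bind. An equivalent one-shot wait, as a sketch (kubectl wait has supported jsonpath conditions since v1.23):

  kubectl --context addons-630093 -n default wait --for=jsonpath='{.status.phase}'=Bound pvc/hpvc --timeout=6m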
addons_test.go:501: (dbg) Run:  kubectl --context addons-630093 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [7d7d08b6-0c55-4e1e-af14-bcf120b4fe87] Pending
helpers_test.go:344: "task-pv-pod" [7d7d08b6-0c55-4e1e-af14-bcf120b4fe87] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:329: TestAddons/parallel/CSI: WARNING: pod list for "default" "app=task-pv-pod" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:506: ***** TestAddons/parallel/CSI: pod "app=task-pv-pod" failed to start within 6m0s: context deadline exceeded ****
addons_test.go:506: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-630093 -n addons-630093
addons_test.go:506: TestAddons/parallel/CSI: showing logs for failed pods as of 2024-12-04 23:20:23.363449556 +0000 UTC m=+583.229434657
addons_test.go:506: (dbg) Run:  kubectl --context addons-630093 describe po task-pv-pod -n default
addons_test.go:506: (dbg) kubectl --context addons-630093 describe po task-pv-pod -n default:
Name:             task-pv-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             addons-630093/192.168.49.2
Start Time:       Wed, 04 Dec 2024 23:14:23 +0000
Labels:           app=task-pv-pod
Annotations:      <none>
Status:           Pending
IP:               10.244.0.26
IPs:
IP:  10.244.0.26
Containers:
task-pv-container:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/usr/share/nginx/html from task-pv-storage (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bbll2 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
task-pv-storage:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  hpvc
ReadOnly:   false
kube-api-access-bbll2:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  6m                   default-scheduler  Successfully assigned default/task-pv-pod to addons-630093
Warning  Failed     5m29s                kubelet            Failed to pull image "docker.io/nginx": initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   Pulling    103s (x4 over 6m)    kubelet            Pulling image "docker.io/nginx"
Warning  Failed     53s (x4 over 5m29s)  kubelet            Error: ErrImagePull
Warning  Failed     53s (x3 over 3m57s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Normal   BackOff    12s (x7 over 5m28s)  kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     12s (x7 over 5m28s)  kubelet            Error: ImagePullBackOff
addons_test.go:506: (dbg) Run:  kubectl --context addons-630093 logs task-pv-pod -n default
addons_test.go:506: (dbg) Non-zero exit: kubectl --context addons-630093 logs task-pv-pod -n default: exit status 1 (70.499463ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "task-pv-container" in pod "task-pv-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
addons_test.go:506: kubectl --context addons-630093 logs task-pv-pod -n default: exit status 1
addons_test.go:507: failed waiting for pod task-pv-pod: app=task-pv-pod within 6m0s: context deadline exceeded
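Note that per the events above the CSI path itself worked: the claim bound, the pod scheduled, and its volumes mounted; the timeout comes solely from the rate-limited docker.io/nginx pull. A quick cluster-side confirmation, as a sketch:

  kubectl --context addons-630093 get events -n default --field-selector reason=Failed --sort-by=.lastTimestamp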
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/CSI]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-630093
helpers_test.go:235: (dbg) docker inspect addons-630093:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "172acc3450ade00044526824741e005120317f6d35ec317f851d2b6dc6d2a3b8",
	        "Created": "2024-12-04T23:11:16.797897353Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 389943,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-04T23:11:16.916347418Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a0bf2062289d31d12b734a031220306d830691a529a6eae8b4c8f4049e20571",
	        "ResolvConfPath": "/var/lib/docker/containers/172acc3450ade00044526824741e005120317f6d35ec317f851d2b6dc6d2a3b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/172acc3450ade00044526824741e005120317f6d35ec317f851d2b6dc6d2a3b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/172acc3450ade00044526824741e005120317f6d35ec317f851d2b6dc6d2a3b8/hosts",
	        "LogPath": "/var/lib/docker/containers/172acc3450ade00044526824741e005120317f6d35ec317f851d2b6dc6d2a3b8/172acc3450ade00044526824741e005120317f6d35ec317f851d2b6dc6d2a3b8-json.log",
	        "Name": "/addons-630093",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-630093:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-630093",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/469ba36a797e51b3c3ffcf32044a5cc7b1eaaf002213862a02e3a76a9b1fcfe2-init/diff:/var/lib/docker/overlay2/e1057f3484b1ab78c06169089ecae0d5a5ffb4d6954d3cd93f0938b7adf18020/diff",
	                "MergedDir": "/var/lib/docker/overlay2/469ba36a797e51b3c3ffcf32044a5cc7b1eaaf002213862a02e3a76a9b1fcfe2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/469ba36a797e51b3c3ffcf32044a5cc7b1eaaf002213862a02e3a76a9b1fcfe2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/469ba36a797e51b3c3ffcf32044a5cc7b1eaaf002213862a02e3a76a9b1fcfe2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-630093",
	                "Source": "/var/lib/docker/volumes/addons-630093/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-630093",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-630093",
	                "name.minikube.sigs.k8s.io": "addons-630093",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "38d3a3f6bb8d75ec22d0acfa9ec923dac8873b55e0bf68a977ec8a7eab9fc43d",
	            "SandboxKey": "/var/run/docker/netns/38d3a3f6bb8d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-630093": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a921fd89d48682e01ff03a455275f7258f4c5b5f271375ec1d96882eeae0da5a",
	                    "EndpointID": "1045d162f6b6ab28f4f633530bdbe7b45cc7c49fe1d735b103b4e8f31f8aba3e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-630093",
	                        "172acc3450ad"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
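Most of the inspect output above matters only for the port map. To pull just that piece (a sketch):

  docker inspect addons-630093 --format '{{json .NetworkSettings.Ports}}'

Here 8443/tcp maps to 127.0.0.1:33143, the host endpoint this kubectl context talks to.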
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-630093 -n addons-630093
helpers_test.go:244: <<< TestAddons/parallel/CSI FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/CSI]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-630093 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-630093 logs -n 25: (1.201336786s)
helpers_test.go:252: TestAddons/parallel/CSI logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-701357   | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC |                     |
	|         | -p download-only-701357              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| delete  | -p download-only-701357              | download-only-701357   | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| delete  | -p download-only-287298              | download-only-287298   | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| delete  | -p download-only-701357              | download-only-701357   | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| start   | --download-only -p                   | download-docker-758817 | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC |                     |
	|         | download-docker-758817               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-758817            | download-docker-758817 | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| start   | --download-only -p                   | binary-mirror-223027   | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC |                     |
	|         | binary-mirror-223027                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45271               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-223027              | binary-mirror-223027   | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| addons  | disable dashboard -p                 | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC |                     |
	|         | addons-630093                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC |                     |
	|         | addons-630093                        |                        |         |         |                     |                     |
	| start   | -p addons-630093 --wait=true         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:13 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	| addons  | addons-630093 addons disable         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:13 UTC | 04 Dec 24 23:13 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-630093 addons disable         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:13 UTC | 04 Dec 24 23:14 UTC |
	|         | gcp-auth --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | -p addons-630093                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-630093 addons                 | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | disable nvidia-device-plugin         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-630093 addons disable         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | amd-gpu-device-plugin                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-630093 addons disable         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| ip      | addons-630093 ip                     | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	| addons  | addons-630093 addons disable         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-630093 addons                 | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | disable inspektor-gadget             |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-630093 addons disable         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	| addons  | addons-630093 addons disable         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:19 UTC | 04 Dec 24 23:20 UTC |
	|         | storage-provisioner-rancher          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-630093 addons                 | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:20 UTC | 04 Dec 24 23:20 UTC |
	|         | disable metrics-server               |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-630093 addons                 | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:20 UTC | 04 Dec 24 23:20 UTC |
	|         | disable cloud-spanner                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 23:10:54
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 23:10:54.556147  389201 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:10:54.556275  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:10:54.556285  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:10:54.556289  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:10:54.556510  389201 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-381016/.minikube/bin
	I1204 23:10:54.557204  389201 out.go:352] Setting JSON to false
	I1204 23:10:54.558202  389201 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6804,"bootTime":1733347051,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 23:10:54.558281  389201 start.go:139] virtualization: kvm guest
	I1204 23:10:54.560449  389201 out.go:177] * [addons-630093] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 23:10:54.561800  389201 notify.go:220] Checking for updates...
	I1204 23:10:54.561821  389201 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 23:10:54.563229  389201 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 23:10:54.564678  389201 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-381016/kubeconfig
	I1204 23:10:54.566233  389201 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-381016/.minikube
	I1204 23:10:54.567553  389201 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 23:10:54.568781  389201 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 23:10:54.570554  389201 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 23:10:54.592245  389201 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1204 23:10:54.592340  389201 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:10:54.635748  389201 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-04 23:10:54.62674737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
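
For reference, the driver health check above (cli_runner.go / info.go) is just `docker system info --format "{{json .}}"` decoded from JSON. A minimal Go sketch of that probe, assuming a local Docker CLI; the struct is illustrative and only names fields visible in the log output, not minikube's actual type:

    // probe_docker.go - a sketch of shelling out to Docker and decoding the
    // JSON info blob, as the cli_runner/info lines above do.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // dockerInfo holds a few of the fields visible in the log; illustrative only.
    type dockerInfo struct {
        Driver        string
        CgroupDriver  string
        NCPU          int
        MemTotal      int64
        ServerVersion string
    }

    func main() {
        out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
        if err != nil {
            panic(err) // e.g. the daemon is not running
        }
        var info dockerInfo
        if err := json.Unmarshal(out, &info); err != nil {
            panic(err)
        }
        fmt.Printf("driver=%s cgroup=%s cpus=%d mem=%d version=%s\n",
            info.Driver, info.CgroupDriver, info.NCPU, info.MemTotal, info.ServerVersion)
    }
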
	I1204 23:10:54.635854  389201 docker.go:318] overlay module found
	I1204 23:10:54.637780  389201 out.go:177] * Using the docker driver based on user configuration
	I1204 23:10:54.639298  389201 start.go:297] selected driver: docker
	I1204 23:10:54.639319  389201 start.go:901] validating driver "docker" against <nil>
	I1204 23:10:54.639333  389201 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 23:10:54.640090  389201 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:10:54.684497  389201 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-04 23:10:54.676209306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1204 23:10:54.684673  389201 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 23:10:54.684915  389201 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 23:10:54.686872  389201 out.go:177] * Using Docker driver with root privileges
	I1204 23:10:54.688173  389201 cni.go:84] Creating CNI manager for ""
	I1204 23:10:54.688255  389201 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1204 23:10:54.688267  389201 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1204 23:10:54.688343  389201 start.go:340] cluster config:
	{Name:addons-630093 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-630093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:10:54.689848  389201 out.go:177] * Starting "addons-630093" primary control-plane node in "addons-630093" cluster
	I1204 23:10:54.691334  389201 cache.go:121] Beginning downloading kic base image for docker with crio
	I1204 23:10:54.692886  389201 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1204 23:10:54.694391  389201 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:10:54.694445  389201 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20045-381016/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 23:10:54.694446  389201 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1204 23:10:54.694486  389201 cache.go:56] Caching tarball of preloaded images
	I1204 23:10:54.694592  389201 preload.go:172] Found /home/jenkins/minikube-integration/20045-381016/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 23:10:54.694609  389201 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 23:10:54.695076  389201 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/config.json ...
	I1204 23:10:54.695108  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/config.json: {Name:mk972e12a39ea9a33ae63a1f9239f64d658df51e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:10:54.710108  389201 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1204 23:10:54.710258  389201 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1204 23:10:54.710280  389201 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory, skipping pull
	I1204 23:10:54.710287  389201 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in cache, skipping pull
	I1204 23:10:54.710299  389201 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	I1204 23:10:54.710311  389201 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from local cache
	I1204 23:11:08.081763  389201 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from cached tarball
	I1204 23:11:08.081807  389201 cache.go:194] Successfully downloaded all kic artifacts
	I1204 23:11:08.081860  389201 start.go:360] acquireMachinesLock for addons-630093: {Name:mk65aca0e5e36a044494f94ee0e0497ac2b0ebab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 23:11:08.081970  389201 start.go:364] duration metric: took 86.786µs to acquireMachinesLock for "addons-630093"
	I1204 23:11:08.081996  389201 start.go:93] Provisioning new machine with config: &{Name:addons-630093 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-630093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:11:08.082085  389201 start.go:125] createHost starting for "" (driver="docker")
	I1204 23:11:08.248667  389201 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1204 23:11:08.249041  389201 start.go:159] libmachine.API.Create for "addons-630093" (driver="docker")
	I1204 23:11:08.249091  389201 client.go:168] LocalClient.Create starting
	I1204 23:11:08.249258  389201 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca.pem
	I1204 23:11:08.313688  389201 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/cert.pem
	I1204 23:11:08.644970  389201 cli_runner.go:164] Run: docker network inspect addons-630093 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1204 23:11:08.660700  389201 cli_runner.go:211] docker network inspect addons-630093 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1204 23:11:08.660788  389201 network_create.go:284] running [docker network inspect addons-630093] to gather additional debugging logs...
	I1204 23:11:08.660826  389201 cli_runner.go:164] Run: docker network inspect addons-630093
	W1204 23:11:08.677347  389201 cli_runner.go:211] docker network inspect addons-630093 returned with exit code 1
	I1204 23:11:08.677402  389201 network_create.go:287] error running [docker network inspect addons-630093]: docker network inspect addons-630093: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-630093 not found
	I1204 23:11:08.677421  389201 network_create.go:289] output of [docker network inspect addons-630093]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-630093 not found
	
	** /stderr **
	I1204 23:11:08.677519  389201 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1204 23:11:08.695034  389201 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016ec7e0}
	I1204 23:11:08.695093  389201 network_create.go:124] attempt to create docker network addons-630093 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1204 23:11:08.695152  389201 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-630093 addons-630093
	I1204 23:11:08.969618  389201 network_create.go:108] docker network addons-630093 192.168.49.0/24 created
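
The subnet 192.168.49.0/24 is the first free private /24 found by network.go; the create call itself is a plain `docker network create` with the bridge driver, MTU option, and minikube labels. A sketch of that invocation, assuming only that the Docker CLI is on PATH (the helper name is invented):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // createMinikubeNetwork mirrors the `docker network create` invocation in
    // the log: a bridge network with a fixed subnet/gateway, MTU option, and
    // minikube labels. Illustrative; errors surface the CLI's own message.
    func createMinikubeNetwork(name, subnet, gateway string, mtu int) error {
        args := []string{
            "network", "create", "--driver=bridge",
            "--subnet=" + subnet, "--gateway=" + gateway,
            "-o", "--ip-masq", "-o", "--icc",
            "-o", fmt.Sprintf("com.docker.network.driver.mtu=%d", mtu),
            "--label=created_by.minikube.sigs.k8s.io=true",
            "--label=name.minikube.sigs.k8s.io=" + name,
            name,
        }
        out, err := exec.Command("docker", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("docker network create: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        err := createMinikubeNetwork("addons-630093", "192.168.49.0/24", "192.168.49.1", 1500)
        fmt.Println(err)
    }
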
	I1204 23:11:08.969673  389201 kic.go:121] calculated static IP "192.168.49.2" for the "addons-630093" container
	I1204 23:11:08.969756  389201 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1204 23:11:08.986135  389201 cli_runner.go:164] Run: docker volume create addons-630093 --label name.minikube.sigs.k8s.io=addons-630093 --label created_by.minikube.sigs.k8s.io=true
	I1204 23:11:09.028135  389201 oci.go:103] Successfully created a docker volume addons-630093
	I1204 23:11:09.028233  389201 cli_runner.go:164] Run: docker run --rm --name addons-630093-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-630093 --entrypoint /usr/bin/test -v addons-630093:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib
	I1204 23:11:12.239841  389201 cli_runner.go:217] Completed: docker run --rm --name addons-630093-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-630093 --entrypoint /usr/bin/test -v addons-630093:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib: (3.211561235s)
	I1204 23:11:12.239873  389201 oci.go:107] Successfully prepared a docker volume addons-630093
	I1204 23:11:12.239893  389201 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:11:12.239931  389201 kic.go:194] Starting extracting preloaded images to volume ...
	I1204 23:11:12.240003  389201 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20045-381016/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-630093:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir
	I1204 23:11:16.734062  389201 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20045-381016/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-630093:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir: (4.493971774s)
	I1204 23:11:16.734103  389201 kic.go:203] duration metric: took 4.49416848s to extract preloaded images to volume ...
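
The preload is unpacked by a throwaway container whose entrypoint is /usr/bin/tar, so the tarball lands directly in the named volume and the host needs no lz4 tooling. A hedged sketch of the same pattern; paths and image are the ones shown in the log, the helper name and the placeholder tarball path are invented:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload replays the pattern above: mount the preload tarball
    // read-only, mount the target volume, and let a short-lived container run
    // `tar -I lz4 -xf` inside it.
    func extractPreload(tarball, volume, image string) error {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("extract preload: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        // Placeholder tarball path; the log uses the minikube cache location.
        err := extractPreload("/path/to/preloaded-images.tar.lz4", "addons-630093",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917")
        fmt.Println(err)
    }
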
	W1204 23:11:16.734242  389201 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1204 23:11:16.734340  389201 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1204 23:11:16.781802  389201 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-630093 --name addons-630093 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-630093 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-630093 --network addons-630093 --ip 192.168.49.2 --volume addons-630093:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615
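
Note the --publish=127.0.0.1:: flags: each container port (22, 8443, ...) gets a random loopback host port, which later steps read back with a `docker container inspect` template (see the 22/tcp lookup below, resolving to 127.0.0.1:33140). A small sketch of that lookup; hostPortFor is an invented name:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostPortFor reads back which random host port Docker assigned to a
    // published container port, using the same inspect template as the log.
    func hostPortFor(container, port string) (string, error) {
        format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
        out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := hostPortFor("addons-630093", "22/tcp")
        fmt.Println(port, err)
    }
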
	I1204 23:11:17.088338  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Running}}
	I1204 23:11:17.106885  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:17.125610  389201 cli_runner.go:164] Run: docker exec addons-630093 stat /var/lib/dpkg/alternatives/iptables
	I1204 23:11:17.168914  389201 oci.go:144] the created container "addons-630093" has a running status.
	I1204 23:11:17.168961  389201 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa...
	I1204 23:11:17.214837  389201 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1204 23:11:17.235866  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:17.253714  389201 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1204 23:11:17.253744  389201 kic_runner.go:114] Args: [docker exec --privileged addons-630093 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1204 23:11:17.295280  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:17.314090  389201 machine.go:93] provisionDockerMachine start ...
	I1204 23:11:17.314213  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:17.333326  389201 main.go:141] libmachine: Using SSH client type: native
	I1204 23:11:17.333585  389201 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1204 23:11:17.333604  389201 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 23:11:17.334344  389201 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53382->127.0.0.1:33140: read: connection reset by peer
	I1204 23:11:20.462359  389201 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-630093
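
The first dial at 23:11:17.334 is reset because sshd inside the fresh container is not up yet; the client simply retries until the `hostname` probe succeeds about three seconds later. A minimal retry loop in the same spirit, using plain TCP dials rather than a full SSH handshake:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH polls the forwarded SSH port until a TCP connection succeeds,
    // the "dial until sshd is up" behaviour visible in the log. The address is
    // the 127.0.0.1:33140 host port Docker published for 22/tcp.
    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("ssh on %s not reachable after %s", addr, timeout)
    }

    func main() {
        fmt.Println(waitForSSH("127.0.0.1:33140", time.Minute))
    }
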
	
	I1204 23:11:20.462394  389201 ubuntu.go:169] provisioning hostname "addons-630093"
	I1204 23:11:20.462459  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:20.480144  389201 main.go:141] libmachine: Using SSH client type: native
	I1204 23:11:20.480382  389201 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1204 23:11:20.480401  389201 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-630093 && echo "addons-630093" | sudo tee /etc/hostname
	I1204 23:11:20.617685  389201 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-630093
	
	I1204 23:11:20.617755  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:20.634927  389201 main.go:141] libmachine: Using SSH client type: native
	I1204 23:11:20.635110  389201 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1204 23:11:20.635127  389201 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-630093' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-630093/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-630093' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 23:11:20.762943  389201 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1204 23:11:20.762974  389201 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20045-381016/.minikube CaCertPath:/home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20045-381016/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20045-381016/.minikube}
	I1204 23:11:20.763024  389201 ubuntu.go:177] setting up certificates
	I1204 23:11:20.763037  389201 provision.go:84] configureAuth start
	I1204 23:11:20.763097  389201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-630093
	I1204 23:11:20.780798  389201 provision.go:143] copyHostCerts
	I1204 23:11:20.780875  389201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20045-381016/.minikube/cert.pem (1123 bytes)
	I1204 23:11:20.780993  389201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20045-381016/.minikube/key.pem (1679 bytes)
	I1204 23:11:20.781063  389201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20045-381016/.minikube/ca.pem (1082 bytes)
	I1204 23:11:20.781117  389201 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20045-381016/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca-key.pem org=jenkins.addons-630093 san=[127.0.0.1 192.168.49.2 addons-630093 localhost minikube]
	I1204 23:11:20.868299  389201 provision.go:177] copyRemoteCerts
	I1204 23:11:20.868362  389201 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 23:11:20.868401  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:20.885888  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:20.979351  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 23:11:21.002115  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 23:11:21.025135  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 23:11:21.048097  389201 provision.go:87] duration metric: took 285.042631ms to configureAuth
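
configureAuth issues a server certificate whose SANs cover every name the machine may be dialed by (127.0.0.1, 192.168.49.2, addons-630093, localhost, minikube). A stdlib sketch of producing such a SAN certificate; it is self-signed for brevity, whereas the log shows signing against the generated CA key:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-630093"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // The SAN list from the provision.go line above:
            DNSNames:    []string{"addons-630093", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        _ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
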
	I1204 23:11:21.048133  389201 ubuntu.go:193] setting minikube options for container-runtime
	I1204 23:11:21.048329  389201 config.go:182] Loaded profile config "addons-630093": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:11:21.048491  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:21.065589  389201 main.go:141] libmachine: Using SSH client type: native
	I1204 23:11:21.065803  389201 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1204 23:11:21.065829  389201 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 23:11:21.286767  389201 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 23:11:21.286801  389201 machine.go:96] duration metric: took 3.972682372s to provisionDockerMachine
	I1204 23:11:21.286818  389201 client.go:171] duration metric: took 13.037716692s to LocalClient.Create
	I1204 23:11:21.286846  389201 start.go:167] duration metric: took 13.037808895s to libmachine.API.Create "addons-630093"
	I1204 23:11:21.286858  389201 start.go:293] postStartSetup for "addons-630093" (driver="docker")
	I1204 23:11:21.286873  389201 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 23:11:21.286987  389201 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 23:11:21.287090  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:21.304282  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:21.395931  389201 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 23:11:21.399160  389201 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1204 23:11:21.399199  389201 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1204 23:11:21.399213  389201 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1204 23:11:21.399225  389201 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1204 23:11:21.399238  389201 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-381016/.minikube/addons for local assets ...
	I1204 23:11:21.399311  389201 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-381016/.minikube/files for local assets ...
	I1204 23:11:21.399355  389201 start.go:296] duration metric: took 112.489476ms for postStartSetup
	I1204 23:11:21.399706  389201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-630093
	I1204 23:11:21.416048  389201 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/config.json ...
	I1204 23:11:21.416313  389201 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1204 23:11:21.416373  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:21.433021  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:21.523629  389201 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1204 23:11:21.527955  389201 start.go:128] duration metric: took 13.445851769s to createHost
	I1204 23:11:21.527994  389201 start.go:83] releasing machines lock for "addons-630093", held for 13.446010021s
	I1204 23:11:21.528078  389201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-630093
	I1204 23:11:21.544604  389201 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 23:11:21.544635  389201 ssh_runner.go:195] Run: cat /version.json
	I1204 23:11:21.544698  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:21.544711  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:21.562063  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:21.563107  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:21.726911  389201 ssh_runner.go:195] Run: systemctl --version
	I1204 23:11:21.731218  389201 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 23:11:21.869255  389201 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1204 23:11:21.873644  389201 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 23:11:21.892231  389201 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1204 23:11:21.892324  389201 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 23:11:21.918534  389201 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
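
Conflicting default CNI configs are renamed with a .mk_disabled suffix rather than deleted, so a later start can restore them. The same find-and-rename step as a Go sketch (disableCNIConfs is an invented name; the shell `find ... -exec mv` above is what actually ran):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableCNIConfs renames anything matching the patterns to *.mk_disabled,
    // leaving already-disabled files alone, and returns what it moved.
    func disableCNIConfs(dir string, patterns ...string) ([]string, error) {
        var disabled []string
        for _, pat := range patterns {
            matches, err := filepath.Glob(filepath.Join(dir, pat))
            if err != nil {
                return nil, err
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    return nil, err
                }
                disabled = append(disabled, m)
            }
        }
        return disabled, nil
    }

    func main() {
        moved, err := disableCNIConfs("/etc/cni/net.d", "*bridge*", "*podman*")
        fmt.Println(moved, err)
    }
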
	I1204 23:11:21.918567  389201 start.go:495] detecting cgroup driver to use...
	I1204 23:11:21.918609  389201 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1204 23:11:21.918738  389201 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 23:11:21.932783  389201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 23:11:21.942996  389201 docker.go:217] disabling cri-docker service (if available) ...
	I1204 23:11:21.943047  389201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 23:11:21.955543  389201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 23:11:21.968274  389201 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 23:11:22.038339  389201 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 23:11:22.105989  389201 docker.go:233] disabling docker service ...
	I1204 23:11:22.106057  389201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 23:11:22.125303  389201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 23:11:22.136595  389201 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 23:11:22.222266  389201 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 23:11:22.302782  389201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 23:11:22.313850  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 23:11:22.329072  389201 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 23:11:22.329153  389201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:11:22.338774  389201 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 23:11:22.338845  389201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:11:22.348617  389201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:11:22.358293  389201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:11:22.368200  389201 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 23:11:22.377304  389201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:11:22.386913  389201 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:11:22.402803  389201 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:11:22.412320  389201 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 23:11:22.420685  389201 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 23:11:22.428658  389201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:11:22.500255  389201 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 23:11:22.610956  389201 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 23:11:22.611044  389201 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 23:11:22.614513  389201 start.go:563] Will wait 60s for crictl version
	I1204 23:11:22.614575  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:11:22.617917  389201 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 23:11:22.653283  389201 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
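
After the crio restart, start.go budgets 60s for the socket to appear and another 60s for a working `crictl version`. A generic polling sketch of that wait pattern; waitFor is an invented helper:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // waitFor polls fn once per second until it succeeds or the budget runs
    // out, the same shape as the two "Will wait 60s for ..." checks above.
    func waitFor(what string, budget time.Duration, fn func() error) error {
        deadline := time.Now().Add(budget)
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("%s not ready after %s: %v", what, budget, err)
            }
            time.Sleep(time.Second)
        }
    }

    func main() {
        sock := "/var/run/crio/crio.sock"
        err := waitFor("crio socket", 60*time.Second, func() error {
            _, err := os.Stat(sock)
            return err
        })
        if err == nil {
            err = waitFor("crictl", 60*time.Second, func() error {
                return exec.Command("sudo", "/usr/bin/crictl", "version").Run()
            })
        }
        fmt.Println(err)
    }
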
	I1204 23:11:22.653370  389201 ssh_runner.go:195] Run: crio --version
	I1204 23:11:22.690618  389201 ssh_runner.go:195] Run: crio --version
	I1204 23:11:22.727703  389201 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1204 23:11:22.729320  389201 cli_runner.go:164] Run: docker network inspect addons-630093 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1204 23:11:22.746518  389201 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1204 23:11:22.750432  389201 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
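
host.minikube.internal is pinned by filtering any stale entry out of /etc/hosts and appending a fresh one, which keeps the edit idempotent across restarts (the shell above stages the result in /tmp/h.$$ before copying it back). An equivalent sketch in Go; pinHost is an invented name, and the rewrite here is direct rather than staged:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // pinHost rewrites hostsPath so exactly one line maps name -> ip,
    // mirroring the `{ grep -v ...; echo ...; } > /tmp/h.$$; cp` pipeline.
    // Writing /etc/hosts requires root.
    func pinHost(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop any stale tab-separated entry for this name
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        fmt.Println(pinHost("/etc/hosts", "192.168.49.1", "host.minikube.internal"))
    }
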
	I1204 23:11:22.761195  389201 kubeadm.go:883] updating cluster {Name:addons-630093 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-630093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 23:11:22.761320  389201 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:11:22.761379  389201 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 23:11:22.829323  389201 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 23:11:22.829348  389201 crio.go:433] Images already preloaded, skipping extraction
	I1204 23:11:22.829393  389201 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 23:11:22.862169  389201 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 23:11:22.862194  389201 cache_images.go:84] Images are preloaded, skipping loading
	I1204 23:11:22.862203  389201 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1204 23:11:22.862323  389201 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-630093 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-630093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 23:11:22.862387  389201 ssh_runner.go:195] Run: crio config
	I1204 23:11:22.906710  389201 cni.go:84] Creating CNI manager for ""
	I1204 23:11:22.906743  389201 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1204 23:11:22.906760  389201 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 23:11:22.906791  389201 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-630093 NodeName:addons-630093 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 23:11:22.906954  389201 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-630093"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1204 23:11:22.907084  389201 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 23:11:22.916048  389201 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 23:11:22.916128  389201 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 23:11:22.924791  389201 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1204 23:11:22.942166  389201 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 23:11:22.959356  389201 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
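Once the rendered config is on the node, it can be sanity-checked before init; a sketch assuming kubeadm v1.26+ (which ships the validate subcommand) and the path used above:

    $ sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new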
	I1204 23:11:22.976793  389201 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1204 23:11:22.980197  389201 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
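The bash one-liner above rewrites /etc/hosts atomically through a temp file: it drops any stale control-plane.minikube.internal entry and re-appends the current one, so the node ends up with:

    192.168.49.2	control-plane.minikube.internal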
	I1204 23:11:22.990601  389201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:11:23.062561  389201 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:11:23.075015  389201 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093 for IP: 192.168.49.2
	I1204 23:11:23.075040  389201 certs.go:194] generating shared ca certs ...
	I1204 23:11:23.075059  389201 certs.go:226] acquiring lock for ca certs: {Name:mk50fab2a60ec4c58718c6f0f51391a1fd27b49a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.075181  389201 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-381016/.minikube/ca.key
	I1204 23:11:23.204545  389201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-381016/.minikube/ca.crt ...
	I1204 23:11:23.204578  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/ca.crt: {Name:mkc915739630db1af592b52d8db74ccdd723c7d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.204795  389201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-381016/.minikube/ca.key ...
	I1204 23:11:23.204810  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/ca.key: {Name:mk98e76db05ffadd20917a2d52b7c5260ba39b61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.204916  389201 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.key
	I1204 23:11:23.290846  389201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.crt ...
	I1204 23:11:23.290885  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.crt: {Name:mkde85a69cd8a6277fae67df41cc397c773bd1a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.291129  389201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.key ...
	I1204 23:11:23.291148  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.key: {Name:mk4d177cf9edd13c7ad0b568d9086767e339e8d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.291277  389201 certs.go:256] generating profile certs ...
	I1204 23:11:23.291366  389201 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.key
	I1204 23:11:23.291400  389201 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt with IP's: []
	I1204 23:11:23.499855  389201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt ...
	I1204 23:11:23.499895  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: {Name:mk9311f602c7b1a2b44c19176448b2aa4b32b7c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.500105  389201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.key ...
	I1204 23:11:23.500123  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.key: {Name:mk9ddfb2303f17ccf88a6e5b8c00cffba1cd1a53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.500223  389201 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.key.8394f548
	I1204 23:11:23.500249  389201 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.crt.8394f548 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1204 23:11:23.788463  389201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.crt.8394f548 ...
	I1204 23:11:23.788500  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.crt.8394f548: {Name:mk43ba65c92ad4331db8d9847c5ef32165302741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.788694  389201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.key.8394f548 ...
	I1204 23:11:23.788714  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.key.8394f548: {Name:mkaced9e8196936ffe141d4dc3e6adda91a33533 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.788818  389201 certs.go:381] copying /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.crt.8394f548 -> /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.crt
	I1204 23:11:23.788916  389201 certs.go:385] copying /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.key.8394f548 -> /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.key
	I1204 23:11:23.788997  389201 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.key
	I1204 23:11:23.789023  389201 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.crt with IP's: []
	I1204 23:11:24.148068  389201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.crt ...
	I1204 23:11:24.148104  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.crt: {Name:mk0ee13602067d1cc858c9534a9707d295b361ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:24.148309  389201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.key ...
	I1204 23:11:24.148327  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.key: {Name:mk0ba88937bb7ca6e51a8cf0c8d2ef8507f0374f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:24.148532  389201 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca-key.pem (1675 bytes)
	I1204 23:11:24.148585  389201 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca.pem (1082 bytes)
	I1204 23:11:24.148628  389201 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/cert.pem (1123 bytes)
	I1204 23:11:24.148673  389201 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/key.pem (1679 bytes)
	I1204 23:11:24.149367  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 23:11:24.173224  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 23:11:24.196229  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 23:11:24.219088  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 23:11:24.242335  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1204 23:11:24.265632  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 23:11:24.288555  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 23:11:24.311820  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 23:11:24.334208  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 23:11:24.356395  389201 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 23:11:24.373538  389201 ssh_runner.go:195] Run: openssl version
	I1204 23:11:24.378816  389201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 23:11:24.388861  389201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:11:24.392560  389201 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:11 /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:11:24.392635  389201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:11:24.399222  389201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
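The b5213941.0 name above follows the OpenSSL subject-hash convention: the filename is the hash printed by the x509 command two lines up, plus a .0 sequence suffix, which is how TLS clients on the node find the minikube CA:

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
    $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0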
	I1204 23:11:24.408373  389201 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 23:11:24.411765  389201 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 23:11:24.411828  389201 kubeadm.go:392] StartCluster: {Name:addons-630093 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-630093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:11:24.411930  389201 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 23:11:24.412006  389201 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 23:11:24.445620  389201 cri.go:89] found id: ""
	I1204 23:11:24.445692  389201 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 23:11:24.454281  389201 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 23:11:24.462658  389201 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1204 23:11:24.462715  389201 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 23:11:24.471058  389201 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 23:11:24.471082  389201 kubeadm.go:157] found existing configuration files:
	
	I1204 23:11:24.471133  389201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 23:11:24.479379  389201 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 23:11:24.479446  389201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 23:11:24.488299  389201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 23:11:24.496565  389201 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 23:11:24.496635  389201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 23:11:24.505412  389201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 23:11:24.514190  389201 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 23:11:24.514243  389201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 23:11:24.522477  389201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 23:11:24.531365  389201 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 23:11:24.531421  389201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 23:11:24.539416  389201 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1204 23:11:24.592567  389201 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1071-gcp\n", err: exit status 1
	I1204 23:11:24.645179  389201 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
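The Service-Kubelet warning is expected in this flow: the log starts kubelet directly (sudo systemctl start kubelet above) without enabling the unit. On a host where kubelet should survive reboots, the fix is the command the warning quotes:

    $ sudo systemctl enable kubelet.service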
	I1204 23:11:33.426336  389201 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 23:11:33.426437  389201 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 23:11:33.426522  389201 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1204 23:11:33.426572  389201 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1071-gcp
	I1204 23:11:33.426602  389201 kubeadm.go:310] OS: Linux
	I1204 23:11:33.426679  389201 kubeadm.go:310] CGROUPS_CPU: enabled
	I1204 23:11:33.426720  389201 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1204 23:11:33.426798  389201 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1204 23:11:33.426877  389201 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1204 23:11:33.426958  389201 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1204 23:11:33.427034  389201 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1204 23:11:33.427111  389201 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1204 23:11:33.427182  389201 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1204 23:11:33.427243  389201 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1204 23:11:33.427304  389201 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 23:11:33.427436  389201 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 23:11:33.427575  389201 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 23:11:33.427676  389201 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 23:11:33.429670  389201 out.go:235]   - Generating certificates and keys ...
	I1204 23:11:33.429776  389201 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 23:11:33.429879  389201 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 23:11:33.429944  389201 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1204 23:11:33.429996  389201 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1204 23:11:33.430058  389201 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1204 23:11:33.430106  389201 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1204 23:11:33.430157  389201 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1204 23:11:33.430253  389201 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-630093 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1204 23:11:33.430323  389201 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1204 23:11:33.430455  389201 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-630093 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1204 23:11:33.430550  389201 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1204 23:11:33.430624  389201 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1204 23:11:33.430694  389201 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1204 23:11:33.430742  389201 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 23:11:33.430787  389201 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 23:11:33.430873  389201 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 23:11:33.430954  389201 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 23:11:33.431013  389201 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 23:11:33.431063  389201 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 23:11:33.431131  389201 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 23:11:33.431189  389201 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 23:11:33.432586  389201 out.go:235]   - Booting up control plane ...
	I1204 23:11:33.432667  389201 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 23:11:33.432728  389201 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 23:11:33.432786  389201 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 23:11:33.432889  389201 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 23:11:33.433004  389201 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 23:11:33.433088  389201 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 23:11:33.433245  389201 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 23:11:33.433395  389201 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 23:11:33.433490  389201 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.66305ms
	I1204 23:11:33.433586  389201 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 23:11:33.433659  389201 kubeadm.go:310] [api-check] The API server is healthy after 4.001728957s
	I1204 23:11:33.433784  389201 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 23:11:33.433892  389201 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 23:11:33.433961  389201 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 23:11:33.434106  389201 kubeadm.go:310] [mark-control-plane] Marking the node addons-630093 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 23:11:33.434165  389201 kubeadm.go:310] [bootstrap-token] Using token: 6qxarj.88k5pjf3ytyfzen4
	I1204 23:11:33.435845  389201 out.go:235]   - Configuring RBAC rules ...
	I1204 23:11:33.435945  389201 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 23:11:33.436019  389201 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 23:11:33.436136  389201 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 23:11:33.436246  389201 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 23:11:33.436351  389201 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 23:11:33.436423  389201 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 23:11:33.436515  389201 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 23:11:33.436552  389201 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 23:11:33.436626  389201 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 23:11:33.436642  389201 kubeadm.go:310] 
	I1204 23:11:33.436722  389201 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 23:11:33.436737  389201 kubeadm.go:310] 
	I1204 23:11:33.436836  389201 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 23:11:33.436844  389201 kubeadm.go:310] 
	I1204 23:11:33.436864  389201 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 23:11:33.436913  389201 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 23:11:33.436961  389201 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 23:11:33.436967  389201 kubeadm.go:310] 
	I1204 23:11:33.437008  389201 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 23:11:33.437016  389201 kubeadm.go:310] 
	I1204 23:11:33.437056  389201 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 23:11:33.437062  389201 kubeadm.go:310] 
	I1204 23:11:33.437107  389201 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 23:11:33.437170  389201 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 23:11:33.437258  389201 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 23:11:33.437274  389201 kubeadm.go:310] 
	I1204 23:11:33.437411  389201 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 23:11:33.437541  389201 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 23:11:33.437553  389201 kubeadm.go:310] 
	I1204 23:11:33.437672  389201 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6qxarj.88k5pjf3ytyfzen4 \
	I1204 23:11:33.437797  389201 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e2721502eca5fe8af4d77f137e4406b90f31d1565f7dd87db91cf7b9fa1e9057 \
	I1204 23:11:33.437833  389201 kubeadm.go:310] 	--control-plane 
	I1204 23:11:33.437842  389201 kubeadm.go:310] 
	I1204 23:11:33.437945  389201 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 23:11:33.437954  389201 kubeadm.go:310] 
	I1204 23:11:33.438055  389201 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6qxarj.88k5pjf3ytyfzen4 \
	I1204 23:11:33.438195  389201 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e2721502eca5fe8af4d77f137e4406b90f31d1565f7dd87db91cf7b9fa1e9057 
	I1204 23:11:33.438211  389201 cni.go:84] Creating CNI manager for ""
	I1204 23:11:33.438221  389201 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1204 23:11:33.439987  389201 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1204 23:11:33.441251  389201 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1204 23:11:33.445237  389201 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1204 23:11:33.445258  389201 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1204 23:11:33.462279  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
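After the kindnet manifest apply above, one quick check that the CNI DaemonSet rolled out; the app=kindnet label is assumed from the upstream kindnet manifest, so treat the selector as illustrative:

    $ kubectl -n kube-system get pods -l app=kindnet -o wide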
	I1204 23:11:33.665861  389201 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 23:11:33.665944  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:33.665972  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-630093 minikube.k8s.io/updated_at=2024_12_04T23_11_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d minikube.k8s.io/name=addons-630093 minikube.k8s.io/primary=true
	I1204 23:11:33.673805  389201 ops.go:34] apiserver oom_adj: -16
	I1204 23:11:33.756672  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:34.256804  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:34.757586  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:35.256809  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:35.757274  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:36.256932  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:36.757774  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:37.257415  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:37.756756  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:38.256823  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:38.333806  389201 kubeadm.go:1113] duration metric: took 4.667934536s to wait for elevateKubeSystemPrivileges
	I1204 23:11:38.333851  389201 kubeadm.go:394] duration metric: took 13.922029737s to StartCluster
	I1204 23:11:38.333875  389201 settings.go:142] acquiring lock: {Name:mke2b5bd7468e0e3a170be0f2243b433cdca2b2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:38.334020  389201 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20045-381016/kubeconfig
	I1204 23:11:38.334556  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/kubeconfig: {Name:mk53a4e908644f8dfb244bee65db94736a5dc52e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:38.334826  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1204 23:11:38.334847  389201 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:11:38.334940  389201 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
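The toEnable map above is the per-profile addon state minikube reconciles at start; individual entries can be toggled the same way after the cluster is up, using the profile name from this run:

    $ minikube addons list -p addons-630093
    $ minikube addons enable metrics-server -p addons-630093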
	I1204 23:11:38.335050  389201 config.go:182] Loaded profile config "addons-630093": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:11:38.335067  389201 addons.go:69] Setting yakd=true in profile "addons-630093"
	I1204 23:11:38.335086  389201 addons.go:234] Setting addon yakd=true in "addons-630093"
	I1204 23:11:38.335088  389201 addons.go:69] Setting inspektor-gadget=true in profile "addons-630093"
	I1204 23:11:38.335099  389201 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-630093"
	I1204 23:11:38.335108  389201 addons.go:69] Setting gcp-auth=true in profile "addons-630093"
	I1204 23:11:38.335116  389201 addons.go:234] Setting addon inspektor-gadget=true in "addons-630093"
	I1204 23:11:38.335118  389201 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-630093"
	I1204 23:11:38.335126  389201 mustload.go:65] Loading cluster: addons-630093
	I1204 23:11:38.335120  389201 addons.go:69] Setting storage-provisioner=true in profile "addons-630093"
	I1204 23:11:38.335142  389201 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-630093"
	I1204 23:11:38.335151  389201 addons.go:234] Setting addon storage-provisioner=true in "addons-630093"
	I1204 23:11:38.335142  389201 addons.go:69] Setting ingress=true in profile "addons-630093"
	I1204 23:11:38.335165  389201 addons.go:69] Setting ingress-dns=true in profile "addons-630093"
	I1204 23:11:38.335168  389201 addons.go:234] Setting addon ingress=true in "addons-630093"
	I1204 23:11:38.335170  389201 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-630093"
	I1204 23:11:38.335177  389201 addons.go:234] Setting addon ingress-dns=true in "addons-630093"
	I1204 23:11:38.335175  389201 addons.go:69] Setting metrics-server=true in profile "addons-630093"
	I1204 23:11:38.335186  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335187  389201 addons.go:234] Setting addon metrics-server=true in "addons-630093"
	I1204 23:11:38.335201  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335205  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335251  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335270  389201 config.go:182] Loaded profile config "addons-630093": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:11:38.335598  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335639  389201 addons.go:69] Setting registry=true in profile "addons-630093"
	I1204 23:11:38.335664  389201 addons.go:234] Setting addon registry=true in "addons-630093"
	I1204 23:11:38.335690  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335770  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335788  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335788  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335799  389201 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-630093"
	I1204 23:11:38.335865  389201 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-630093"
	I1204 23:11:38.335890  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.336127  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.336356  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335154  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335131  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.337395  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335166  389201 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-630093"
	I1204 23:11:38.337522  389201 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-630093"
	I1204 23:11:38.335779  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.337583  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335154  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335618  389201 addons.go:69] Setting volcano=true in profile "addons-630093"
	I1204 23:11:38.337980  389201 addons.go:234] Setting addon volcano=true in "addons-630093"
	I1204 23:11:38.338050  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.338346  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.338511  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.338659  389201 out.go:177] * Verifying Kubernetes components...
	I1204 23:11:38.338743  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335079  389201 addons.go:69] Setting cloud-spanner=true in profile "addons-630093"
	I1204 23:11:38.339343  389201 addons.go:234] Setting addon cloud-spanner=true in "addons-630093"
	I1204 23:11:38.339416  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.342329  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.343246  389201 addons.go:69] Setting default-storageclass=true in profile "addons-630093"
	I1204 23:11:38.343284  389201 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-630093"
	I1204 23:11:38.343690  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.343795  389201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:11:38.335605  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335627  389201 addons.go:69] Setting volumesnapshots=true in profile "addons-630093"
	I1204 23:11:38.344127  389201 addons.go:234] Setting addon volumesnapshots=true in "addons-630093"
	I1204 23:11:38.344187  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.369102  389201 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1204 23:11:38.370392  389201 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1204 23:11:38.370441  389201 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1204 23:11:38.370514  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
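The inspect template above is how minikube resolves the host-side port mapped to the container's SSH port 22; run standalone it prints just the port (33140 in this run, matching the sshutil lines below):

    $ docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-630093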
	I1204 23:11:38.375367  389201 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1204 23:11:38.376764  389201 out.go:177]   - Using image docker.io/registry:2.8.3
	I1204 23:11:38.378315  389201 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1204 23:11:38.378339  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1204 23:11:38.378415  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.387789  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.390443  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.396264  389201 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1204 23:11:38.397739  389201 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1204 23:11:38.397765  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1204 23:11:38.397836  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.403885  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1204 23:11:38.404091  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.406664  389201 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1204 23:11:38.407794  389201 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1204 23:11:38.409084  389201 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 23:11:38.413429  389201 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1204 23:11:38.413459  389201 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1204 23:11:38.413462  389201 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1204 23:11:38.413531  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.413533  389201 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1204 23:11:38.413544  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1204 23:11:38.413597  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.413711  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1204 23:11:38.413833  389201 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 23:11:38.413845  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 23:11:38.413897  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.414878  389201 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1204 23:11:38.414894  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1204 23:11:38.414957  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.416261  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1204 23:11:38.418117  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1204 23:11:38.419304  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1204 23:11:38.420751  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1204 23:11:38.422006  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1204 23:11:38.423748  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1204 23:11:38.424837  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1204 23:11:38.424860  389201 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1204 23:11:38.424941  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.430181  389201 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1204 23:11:38.434134  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1204 23:11:38.434699  389201 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1204 23:11:38.435845  389201 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1204 23:11:38.435868  389201 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1204 23:11:38.435951  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.438678  389201 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1204 23:11:38.444191  389201 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1204 23:11:38.444221  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1204 23:11:38.444288  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.451026  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.452847  389201 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1204 23:11:38.454187  389201 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1204 23:11:38.454245  389201 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1204 23:11:38.454263  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1204 23:11:38.454326  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.455564  389201 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1204 23:11:38.455600  389201 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1204 23:11:38.455669  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	W1204 23:11:38.458222  389201 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1204 23:11:38.462209  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.470069  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.470586  389201 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-630093"
	I1204 23:11:38.470686  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.471216  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.473482  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.476209  389201 addons.go:234] Setting addon default-storageclass=true in "addons-630093"
	I1204 23:11:38.476266  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.476733  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.477420  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.486737  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.488076  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.494091  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.494760  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.500157  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.514409  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.517053  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.526764  389201 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1204 23:11:38.528218  389201 out.go:177]   - Using image docker.io/busybox:stable
	I1204 23:11:38.529542  389201 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1204 23:11:38.529568  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1204 23:11:38.529635  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.532873  389201 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 23:11:38.532892  389201 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 23:11:38.532949  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.547794  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.550902  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.714491  389201 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:11:38.714590  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
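The sed pipeline above patches the CoreDNS Corefile in place so pods can resolve the host; the stanza it injects ahead of the forward directive (as encoded in the escapes) is:

    hosts {
        192.168.49.1 host.minikube.internal
        fallthrough
    }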
	I1204 23:11:38.730697  389201 node_ready.go:35] waiting up to 6m0s for node "addons-630093" to be "Ready" ...
	I1204 23:11:38.896083  389201 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1204 23:11:38.896129  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1204 23:11:38.902650  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 23:11:38.903274  389201 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1204 23:11:38.903334  389201 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1204 23:11:38.908154  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1204 23:11:38.995367  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1204 23:11:38.996682  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1204 23:11:39.003953  389201 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1204 23:11:39.003987  389201 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1204 23:11:39.009058  389201 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1204 23:11:39.009092  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1204 23:11:39.011952  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 23:11:39.015960  389201 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1204 23:11:39.015992  389201 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1204 23:11:39.095325  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1204 23:11:39.099215  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1204 23:11:39.107754  389201 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1204 23:11:39.107787  389201 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1204 23:11:39.111656  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1204 23:11:39.199729  389201 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1204 23:11:39.199775  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1204 23:11:39.206060  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1204 23:11:39.206157  389201 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1204 23:11:39.207660  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1204 23:11:39.313681  389201 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 23:11:39.313712  389201 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1204 23:11:39.315754  389201 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1204 23:11:39.315836  389201 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1204 23:11:39.402197  389201 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1204 23:11:39.402298  389201 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1204 23:11:39.497285  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1204 23:11:39.613001  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 23:11:39.795499  389201 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1204 23:11:39.795537  389201 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1204 23:11:39.908631  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1204 23:11:39.908730  389201 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1204 23:11:40.110384  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1204 23:11:40.110490  389201 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1204 23:11:40.203583  389201 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1204 23:11:40.203684  389201 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1204 23:11:40.302900  389201 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1204 23:11:40.302989  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1204 23:11:40.305736  389201 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.591107897s)
	I1204 23:11:40.305865  389201 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
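The sed pipeline that just completed rewrites the coredns ConfigMap in place: it injects a hosts block ahead of the existing forward directive and a log directive ahead of errors. Assuming an otherwise stock Corefile (surrounding directives elided), the patched fragment looks roughly like:

		log
		errors
		hosts {
		   192.168.49.1 host.minikube.internal
		   fallthrough
		}
		forward . /etc/resolv.conf

This is what lets pods in the cluster resolve host.minikube.internal to the Docker network gateway (192.168.49.1).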
	I1204 23:11:40.415986  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.513233503s)
	I1204 23:11:40.516873  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1204 23:11:40.516909  389201 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1204 23:11:40.606740  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1204 23:11:40.606836  389201 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1204 23:11:40.706038  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1204 23:11:41.013840  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.105639169s)
	I1204 23:11:41.019324  389201 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-630093" context rescaled to 1 replicas
	I1204 23:11:41.019970  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
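The rescale logged just above pins CoreDNS to a single replica, minikube's usual setting for a single-node cluster. A roughly equivalent manual command (a sketch of the effect, not what the tool literally runs) would be:

		kubectl --context addons-630093 -n kube-system scale deployment coredns --replicas=1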
	I1204 23:11:41.098870  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1204 23:11:41.098907  389201 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1204 23:11:41.103755  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.108338868s)
	I1204 23:11:41.296521  389201 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1204 23:11:41.296620  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1204 23:11:41.604186  389201 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1204 23:11:41.604271  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1204 23:11:41.711584  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1204 23:11:41.895283  389201 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1204 23:11:41.895375  389201 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1204 23:11:42.005218  389201 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1204 23:11:42.005322  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1204 23:11:42.196571  389201 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1204 23:11:42.196687  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1204 23:11:42.209452  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.212725161s)
	I1204 23:11:42.322610  389201 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1204 23:11:42.322752  389201 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1204 23:11:42.502862  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1204 23:11:42.809979  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.797973312s)
	I1204 23:11:42.810142  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.714779141s)
	I1204 23:11:43.015142  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.91582183s)
	I1204 23:11:43.300319  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
	I1204 23:11:44.520283  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.40857896s)
	I1204 23:11:44.520372  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.02299016s)
	I1204 23:11:44.520392  389201 addons.go:475] Verifying addon ingress=true in "addons-630093"
	I1204 23:11:44.520419  389201 addons.go:475] Verifying addon registry=true in "addons-630093"
	I1204 23:11:44.520330  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.312579258s)
	I1204 23:11:44.520780  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.814712029s)
	I1204 23:11:44.520741  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.907702215s)
	I1204 23:11:44.521986  389201 addons.go:475] Verifying addon metrics-server=true in "addons-630093"
	I1204 23:11:44.522358  389201 out.go:177] * Verifying ingress addon...
	I1204 23:11:44.522391  389201 out.go:177] * Verifying registry addon...
	I1204 23:11:44.523305  389201 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-630093 service yakd-dashboard -n yakd-dashboard
	
	I1204 23:11:44.525119  389201 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1204 23:11:44.525119  389201 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1204 23:11:44.600633  389201 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1204 23:11:44.600664  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:44.600855  389201 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1204 23:11:44.600872  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:45.030335  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:45.031111  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:45.524701  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.813019436s)
	W1204 23:11:45.524761  389201 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1204 23:11:45.524790  389201 retry.go:31] will retry after 181.865687ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
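The failure above is the classic CRD establishment race: a single kubectl apply both creates the VolumeSnapshot CRDs and tries to create a VolumeSnapshotClass, but the new API group is not yet registered in discovery, so the resource-mapping lookup fails. minikube copes by retrying (with kubectl apply --force, as seen below); a manual workaround under the same assumptions is to wait for the CRDs named in the stdout above to be Established before applying the custom resources:

		kubectl wait --for condition=established --timeout=60s \
		  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
		  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
		  crd/volumesnapshots.snapshot.storage.k8s.io
		kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml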
	I1204 23:11:45.529400  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:45.529925  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:45.620284  389201 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1204 23:11:45.620363  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:45.640586  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:45.707473  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1204 23:11:45.802964  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
	I1204 23:11:45.916555  389201 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1204 23:11:45.999202  389201 addons.go:234] Setting addon gcp-auth=true in "addons-630093"
	I1204 23:11:45.999264  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:45.999784  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:46.028530  389201 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1204 23:11:46.028595  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:46.031316  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:46.031818  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:46.049437  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:46.408520  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.905505829s)
	I1204 23:11:46.408572  389201 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-630093"
	I1204 23:11:46.410390  389201 out.go:177] * Verifying csi-hostpath-driver addon...
	I1204 23:11:46.413226  389201 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1204 23:11:46.423132  389201 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1204 23:11:46.423158  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:46.530521  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:46.530917  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:46.918004  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:47.028913  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:47.029388  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:47.417466  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:47.531801  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:47.532309  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:47.916654  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:48.028517  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:48.029048  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:48.236314  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
	I1204 23:11:48.416588  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:48.528958  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:48.529570  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:48.735256  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.027721867s)
	I1204 23:11:48.735290  389201 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.706722291s)
	I1204 23:11:48.737269  389201 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1204 23:11:48.738737  389201 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1204 23:11:48.739945  389201 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1204 23:11:48.739962  389201 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1204 23:11:48.757606  389201 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1204 23:11:48.757640  389201 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1204 23:11:48.774462  389201 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1204 23:11:48.774491  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1204 23:11:48.791359  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1204 23:11:48.917479  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:49.028378  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:49.028791  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:49.119035  389201 addons.go:475] Verifying addon gcp-auth=true in "addons-630093"
	I1204 23:11:49.120662  389201 out.go:177] * Verifying gcp-auth addon...
	I1204 23:11:49.123168  389201 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1204 23:11:49.127558  389201 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1204 23:11:49.127594  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:49.417311  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:49.529241  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:49.529771  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:49.626790  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:49.917626  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:50.028348  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:50.028726  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:50.128054  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:50.417233  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:50.529158  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:50.529580  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:50.627050  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:50.734676  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
	I1204 23:11:50.917259  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:51.029147  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:51.029767  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:51.126874  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:51.417238  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:51.529239  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:51.529661  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:51.627160  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:51.916950  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:52.028762  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:52.029207  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:52.127128  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:52.417313  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:52.529136  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:52.529632  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:52.626885  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:52.917040  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:53.028643  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:53.029069  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:53.126271  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:53.233877  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
	I1204 23:11:53.417285  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:53.529030  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:53.529451  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:53.626877  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:53.917489  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:54.029327  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:54.029771  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:54.127217  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:54.416734  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:54.528697  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:54.529051  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:54.626826  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:54.916888  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:55.028438  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:55.028959  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:55.126396  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:55.234291  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
	I1204 23:11:55.417202  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:55.528962  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:55.529441  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:55.626790  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:55.917367  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:56.028910  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:56.029339  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:56.127003  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:56.416550  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:56.528268  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:56.528637  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:56.626903  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:56.917742  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:57.028644  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:57.029259  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:57.126655  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:57.417402  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:57.528943  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:57.529266  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:57.626610  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:57.802859  389201 node_ready.go:49] node "addons-630093" has status "Ready":"True"
	I1204 23:11:57.802968  389201 node_ready.go:38] duration metric: took 19.072220894s for node "addons-630093" to be "Ready" ...
	I1204 23:11:57.803001  389201 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 23:11:57.812284  389201 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace to be "Ready" ...
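The node_ready and pod_ready loops above poll the API server through minikube's own client rather than shelling out; in kubectl terms the checks are roughly equivalent to (a sketch, assuming the same context and timeouts):

		kubectl --context addons-630093 wait --for=condition=Ready node/addons-630093 --timeout=6m
		kubectl --context addons-630093 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m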
	I1204 23:11:57.918256  389201 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1204 23:11:57.918288  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:58.028987  389201 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1204 23:11:58.029025  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:58.029163  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:58.128052  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:58.418190  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:58.529517  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:58.529923  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:58.627312  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:58.919346  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:59.029950  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:59.030369  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:59.127570  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:59.418251  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:59.530785  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:59.531584  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:59.630759  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:59.818327  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:11:59.918676  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:00.030531  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:00.030960  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:00.127203  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:00.418498  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:00.529214  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:00.529347  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:00.626705  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:00.919036  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:01.029541  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:01.029735  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:01.127079  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:01.417804  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:01.529706  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:01.530306  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:01.626425  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:01.818875  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:01.918913  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:02.029895  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:02.030382  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:02.127260  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:02.423666  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:02.529870  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:02.530595  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:02.627705  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:02.918184  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:03.096822  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:03.098279  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:03.126704  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:03.418293  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:03.530189  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:03.531307  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:03.626994  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:03.819175  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:03.919019  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:04.029490  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:04.030689  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:04.127527  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:04.418611  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:04.529829  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:04.530049  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:04.627138  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:04.918884  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:05.029547  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:05.030544  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:05.127501  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:05.418586  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:05.529727  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:05.530098  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:05.629968  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:05.819250  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:05.917895  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:06.030341  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:06.030532  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:06.130159  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:06.417534  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:06.529640  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:06.529905  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:06.626512  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:06.918521  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:07.029270  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:07.029688  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:07.127053  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:07.417502  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:07.529692  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:07.530328  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:07.629361  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:07.917534  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:08.029222  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:08.029469  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:08.127082  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:08.319034  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:08.419261  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:08.529942  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:08.530672  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:08.627267  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:08.917968  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:09.029951  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:09.030163  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:09.126878  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:09.418269  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:09.529306  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:09.529537  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:09.627199  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:09.918335  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:10.029495  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:10.029837  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:10.127443  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:10.319436  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:10.418755  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:10.529622  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:10.529807  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:10.626252  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:10.917779  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:11.030059  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:11.030182  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:11.127180  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:11.419556  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:11.530723  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:11.531122  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:11.626618  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:11.918234  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:12.029550  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:12.029678  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:12.127740  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:12.418986  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:12.530019  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:12.530137  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:12.630114  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:12.819093  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:12.918200  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:13.029270  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:13.029507  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:13.127361  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:13.418296  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:13.528977  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:13.529560  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:13.629701  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:13.918107  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:14.028623  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:14.029060  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:14.126995  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:14.417833  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:14.601066  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:14.601685  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:14.700398  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:14.819539  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:14.918753  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:15.029149  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:15.029311  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:15.127355  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:15.417956  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:15.530046  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:15.530173  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:15.626804  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:15.817465  389201 pod_ready.go:93] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:15.817493  389201 pod_ready.go:82] duration metric: took 18.005165509s for pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.817504  389201 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nvslc" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.822063  389201 pod_ready.go:93] pod "coredns-7c65d6cfc9-nvslc" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:15.822085  389201 pod_ready.go:82] duration metric: took 4.574786ms for pod "coredns-7c65d6cfc9-nvslc" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.822105  389201 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.826436  389201 pod_ready.go:93] pod "etcd-addons-630093" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:15.826459  389201 pod_ready.go:82] duration metric: took 4.348229ms for pod "etcd-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.826472  389201 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.831213  389201 pod_ready.go:93] pod "kube-apiserver-addons-630093" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:15.831241  389201 pod_ready.go:82] duration metric: took 4.762165ms for pod "kube-apiserver-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.831254  389201 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.835452  389201 pod_ready.go:93] pod "kube-controller-manager-addons-630093" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:15.835474  389201 pod_ready.go:82] duration metric: took 4.212413ms for pod "kube-controller-manager-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.835486  389201 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k9l4p" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.918128  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:16.028729  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:16.029367  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:16.127315  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:16.216237  389201 pod_ready.go:93] pod "kube-proxy-k9l4p" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:16.216263  389201 pod_ready.go:82] duration metric: took 380.769812ms for pod "kube-proxy-k9l4p" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:16.216274  389201 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:16.417739  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:16.529747  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:16.530393  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:16.615744  389201 pod_ready.go:93] pod "kube-scheduler-addons-630093" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:16.615777  389201 pod_ready.go:82] duration metric: took 399.4948ms for pod "kube-scheduler-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:16.615792  389201 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:16.629644  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:16.918480  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:17.029640  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:17.030079  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:17.127575  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:17.418114  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:17.528932  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:17.530075  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:17.704033  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:17.998609  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:18.099865  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:18.100201  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:18.197667  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:18.418883  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:18.599572  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:18.600671  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:18.701570  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:18.703573  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:18.920015  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:19.100730  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:19.102395  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:19.198834  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:19.418509  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:19.529727  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:19.530383  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:19.626273  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:19.918805  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:20.029240  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:20.029932  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:20.126903  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:20.418249  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:20.529801  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:20.530308  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:20.626097  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:20.918878  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:21.029289  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:21.029519  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:21.122606  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:21.126039  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:21.418484  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:21.529710  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:21.530710  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:21.626146  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:21.918962  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:22.029458  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:22.029740  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:22.127214  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:22.419474  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:22.530071  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:22.530666  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:22.626757  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:22.919558  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:23.030183  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:23.030603  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:23.126737  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:23.419160  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:23.530176  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:23.530357  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:23.622846  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:23.626203  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:23.918700  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:24.028728  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:24.028982  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:24.126654  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:24.417980  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:24.530135  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:24.531100  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:24.627054  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:24.918427  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:25.028887  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:25.029218  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:25.126097  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:25.418781  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:25.529648  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:25.529792  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:25.625375  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:25.918175  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:26.029449  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:26.029717  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:26.121949  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:26.125965  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:26.418478  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:26.529251  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:26.529458  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:26.626865  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:26.918569  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:27.029067  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:27.030277  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:27.125626  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:27.418385  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:27.528662  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:27.529405  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:27.628474  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:27.917874  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:28.029704  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:28.029928  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:28.122056  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:28.126396  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:28.419714  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:28.529079  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:28.529300  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:28.628622  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:28.918659  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:29.028740  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:29.029352  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:29.126050  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:29.417959  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:29.529472  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:29.530620  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:29.629092  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:29.919400  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:30.030302  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:30.030514  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:30.122668  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:30.126280  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:30.418540  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:30.529288  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:30.529642  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:30.626549  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:30.918094  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:31.028726  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:31.029185  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:31.127032  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:31.418917  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:31.529225  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:31.529895  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:31.626376  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:31.917674  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:32.029127  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:32.029446  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:32.126980  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:32.418178  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:32.529226  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:32.529801  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:32.622787  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:32.629901  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:32.918843  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:33.029651  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:33.029732  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:33.126752  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:33.417866  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:33.529615  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:33.529803  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:33.626861  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:33.918296  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:34.029295  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:34.029827  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:34.126281  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:34.418699  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:34.529505  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:34.529651  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:34.642845  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:35.016246  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:35.029633  389201 kapi.go:107] duration metric: took 50.504509788s to wait for kubernetes.io/minikube-addons=registry ...
	I1204 23:12:35.030572  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:35.122008  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:35.126344  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:35.418953  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:35.529492  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:35.629301  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:35.917990  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:36.029160  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:36.126923  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:36.418071  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:36.530620  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:36.626415  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:36.918072  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:37.030355  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:37.122395  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:37.130220  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:37.418413  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:37.528927  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:37.625990  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:37.918227  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:38.029187  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:38.126369  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:38.417932  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:38.598800  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:38.697192  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:38.919507  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:39.029934  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:39.126608  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:39.417800  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:39.529782  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:39.621784  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:39.626154  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:39.918849  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:40.030159  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:40.126095  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:40.418225  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:40.531480  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:40.626066  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:40.922455  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:41.030073  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:41.132353  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:41.419213  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:41.530198  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:41.623990  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:41.626185  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:41.918285  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:42.029080  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:42.126525  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:42.417894  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:42.530073  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:42.628888  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:42.917931  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:43.029806  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:43.129456  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:43.417942  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:43.530219  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:43.626382  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:43.919862  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:44.030101  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:44.121891  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:44.126376  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:44.418428  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:44.529385  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:44.626961  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:44.918331  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:45.029815  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:45.130119  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:45.418987  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:45.530112  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:45.626679  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:45.917695  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:46.030308  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:46.122743  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:46.125898  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:46.418369  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:46.530377  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:46.626026  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:46.919590  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:47.029382  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:47.126945  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:47.418103  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:47.529610  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:47.626586  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:47.918784  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:48.030793  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:48.123333  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:48.125995  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:48.418085  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:48.529161  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:48.625851  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:48.918833  389201 kapi.go:107] duration metric: took 1m2.505604843s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1204 23:12:49.029518  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:49.126520  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:49.529429  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:49.626178  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:50.028779  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:50.126359  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:50.529535  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:50.621344  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:50.626657  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:51.029711  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:51.126167  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:51.528977  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:51.625730  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:52.029401  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:52.126687  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:52.529779  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:52.622444  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:52.626730  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:53.029789  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:53.125660  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:53.529648  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:53.625950  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:54.029567  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:54.126564  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:54.529619  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:54.626519  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:55.029917  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:55.121799  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:55.125909  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:55.530199  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:55.626324  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:56.029734  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:56.125940  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:56.529705  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:56.626054  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:57.072272  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:57.122241  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:57.126623  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:57.529316  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:57.626270  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:58.029340  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:58.126509  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:58.529559  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:58.626455  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:59.029135  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:59.126845  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:59.529933  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:59.621754  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:59.625881  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:00.029773  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:00.126622  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:00.529528  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:00.626582  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:01.029576  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:01.127058  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:01.530191  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:01.622552  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:01.626939  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:02.030598  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:02.130438  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:02.529743  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:02.626141  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:03.030953  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:03.149927  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:03.529333  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:03.622858  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:03.626677  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:04.029338  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:04.128963  389201 kapi.go:107] duration metric: took 1m15.005791002s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1204 23:13:04.130952  389201 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-630093 cluster.
	I1204 23:13:04.132630  389201 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1204 23:13:04.134066  389201 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
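	(For reference, the opt-out the message above describes is a pod label. A minimal sketch, assuming a hypothetical pod named "demo" and the label value "true" used by the minikube docs convention; the label key `gcp-auth-skip-secret` and the `--refresh` re-enable both come from the messages above.)

	    # Hypothetical pod (the name "demo" is illustrative); the gcp-auth
	    # webhook skips the credential mount when this label is present.
	    kubectl apply -f - <<EOF
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: demo
	      labels:
	        gcp-auth-skip-secret: "true"
	    spec:
	      containers:
	      - name: demo
	        image: nginx:alpine
	    EOF

	    # To mount credentials into pods created before the addon was enabled:
	    minikube addons enable gcp-auth --refresh
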
	I1204 23:13:04.599921  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:05.100341  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:05.599382  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:05.623902  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:06.029904  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:06.529164  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:07.029826  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:07.531039  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:08.030122  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:08.123005  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:08.529214  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:09.029839  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:09.529349  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:10.030137  389201 kapi.go:107] duration metric: took 1m25.505015693s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1204 23:13:10.032415  389201 out.go:177] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, default-storageclass, ingress-dns, storage-provisioner, cloud-spanner, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1204 23:13:10.034021  389201 addons.go:510] duration metric: took 1m31.699072904s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin default-storageclass ingress-dns storage-provisioner cloud-spanner storage-provisioner-rancher inspektor-gadget metrics-server yakd volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
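	(The repeated kapi.go:96 lines above are minikube polling pods matching each addon's label selector until none is still Pending. A rough shell analogue, illustrative only: minikube implements this in Go via client-go, not kubectl; the selector and namespace shown match the ingress-nginx wait in this log.)

	    selector='app.kubernetes.io/name=ingress-nginx'
	    phases=''
	    # Keep polling until at least one pod matches the selector and none
	    # of the matching pods is still in phase Pending.
	    until [ -n "$phases" ] && ! printf '%s\n' "$phases" | grep -q Pending; do
	      sleep 0.5
	      phases=$(kubectl -n ingress-nginx get pods -l "$selector" \
	                 -o jsonpath='{.items[*].status.phase}')
	    done
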
	I1204 23:13:10.622508  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:13.121894  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:15.622516  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:18.122616  389201 pod_ready.go:93] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"True"
	I1204 23:13:18.122655  389201 pod_ready.go:82] duration metric: took 1m1.506852695s for pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace to be "Ready" ...
	I1204 23:13:18.122671  389201 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-rj8jd" in "kube-system" namespace to be "Ready" ...
	I1204 23:13:18.127666  389201 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-rj8jd" in "kube-system" namespace has status "Ready":"True"
	I1204 23:13:18.127689  389201 pod_ready.go:82] duration metric: took 5.009056ms for pod "nvidia-device-plugin-daemonset-rj8jd" in "kube-system" namespace to be "Ready" ...
	I1204 23:13:18.127712  389201 pod_ready.go:39] duration metric: took 1m20.324660399s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
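	(The pod_ready.go waits summarized above have a direct kubectl counterpart; a sketch, assuming the Ready condition checked by the test framework maps to kubectl's condition=Ready, with the kube-dns label and the 6m0s timeout taken from the log.)

	    # Wait for system-critical pods to report the Ready condition,
	    # mirroring one of the label selectors listed above.
	    kubectl -n kube-system wait --for=condition=Ready \
	      pod -l k8s-app=kube-dns --timeout=6m
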
	I1204 23:13:18.127736  389201 api_server.go:52] waiting for apiserver process to appear ...
	I1204 23:13:18.127773  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 23:13:18.127852  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 23:13:18.163496  389201 cri.go:89] found id: "697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f"
	I1204 23:13:18.163523  389201 cri.go:89] found id: ""
	I1204 23:13:18.163535  389201 logs.go:282] 1 containers: [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f]
	I1204 23:13:18.163604  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:18.167359  389201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 23:13:18.167448  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 23:13:18.204556  389201 cri.go:89] found id: "249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1"
	I1204 23:13:18.204586  389201 cri.go:89] found id: ""
	I1204 23:13:18.204598  389201 logs.go:282] 1 containers: [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1]
	I1204 23:13:18.204666  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:18.208385  389201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 23:13:18.208480  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 23:13:18.243732  389201 cri.go:89] found id: "1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2"
	I1204 23:13:18.243758  389201 cri.go:89] found id: ""
	I1204 23:13:18.243766  389201 logs.go:282] 1 containers: [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2]
	I1204 23:13:18.243825  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:18.247475  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 23:13:18.247549  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 23:13:18.284446  389201 cri.go:89] found id: "f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc"
	I1204 23:13:18.284481  389201 cri.go:89] found id: ""
	I1204 23:13:18.284494  389201 logs.go:282] 1 containers: [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc]
	I1204 23:13:18.284553  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:18.288056  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 23:13:18.288154  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 23:13:18.322998  389201 cri.go:89] found id: "76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d"
	I1204 23:13:18.323035  389201 cri.go:89] found id: ""
	I1204 23:13:18.323071  389201 logs.go:282] 1 containers: [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d]
	I1204 23:13:18.323127  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:18.326560  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 23:13:18.326662  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 23:13:18.360672  389201 cri.go:89] found id: "c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec"
	I1204 23:13:18.360695  389201 cri.go:89] found id: ""
	I1204 23:13:18.360704  389201 logs.go:282] 1 containers: [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec]
	I1204 23:13:18.360759  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:18.364394  389201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 23:13:18.364465  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 23:13:18.398753  389201 cri.go:89] found id: "f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac"
	I1204 23:13:18.398779  389201 cri.go:89] found id: ""
	I1204 23:13:18.398788  389201 logs.go:282] 1 containers: [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac]
	I1204 23:13:18.398837  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:18.402272  389201 logs.go:123] Gathering logs for CRI-O ...
	I1204 23:13:18.402308  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 23:13:18.480499  389201 logs.go:123] Gathering logs for etcd [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1] ...
	I1204 23:13:18.480540  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1"
	I1204 23:13:18.524595  389201 logs.go:123] Gathering logs for kube-scheduler [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc] ...
	I1204 23:13:18.524634  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc"
	I1204 23:13:18.566986  389201 logs.go:123] Gathering logs for kube-proxy [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d] ...
	I1204 23:13:18.567027  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d"
	I1204 23:13:18.602070  389201 logs.go:123] Gathering logs for kube-controller-manager [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec] ...
	I1204 23:13:18.602102  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec"
	I1204 23:13:18.658618  389201 logs.go:123] Gathering logs for kindnet [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac] ...
	I1204 23:13:18.658684  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac"
	I1204 23:13:18.696622  389201 logs.go:123] Gathering logs for container status ...
	I1204 23:13:18.696664  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 23:13:18.740640  389201 logs.go:123] Gathering logs for kubelet ...
	I1204 23:13:18.740679  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1204 23:13:18.779439  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:38 addons-630093 kubelet[1643]: W1204 23:11:38.340569    1643 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.779629  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:38 addons-630093 kubelet[1643]: E1204 23:11:38.340638    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.791512  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.658654    1643 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-630093" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.791674  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.658718    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.791800  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.658773    1643 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.791953  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.658814    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.792143  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661330    1643 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.792315  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661384    1643 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.792450  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661600    1643 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.792613  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661632    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.792743  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661689    1643 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.792901  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661706    1643 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.793033  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661862    1643 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.793194  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661888    1643 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.793332  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661952    1643 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.793495  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661968    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	I1204 23:13:18.826225  389201 logs.go:123] Gathering logs for dmesg ...
	I1204 23:13:18.826269  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 23:13:18.853723  389201 logs.go:123] Gathering logs for describe nodes ...
	I1204 23:13:18.853768  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 23:13:18.956948  389201 logs.go:123] Gathering logs for kube-apiserver [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f] ...
	I1204 23:13:18.956987  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f"
	I1204 23:13:19.002234  389201 logs.go:123] Gathering logs for coredns [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2] ...
	I1204 23:13:19.002271  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2"
	I1204 23:13:19.041497  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:13:19.041531  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1204 23:13:19.041595  389201 out.go:270] X Problems detected in kubelet:
	W1204 23:13:19.041609  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661706    1643 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:19.041619  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661862    1643 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-630093' and this object
	W1204 23:13:19.041628  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661888    1643 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:19.041636  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661952    1643 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:19.041642  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661968    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	I1204 23:13:19.041649  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:13:19.041654  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:13:29.043089  389201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 23:13:29.058130  389201 api_server.go:72] duration metric: took 1m50.723247239s to wait for apiserver process to appear ...
	I1204 23:13:29.058169  389201 api_server.go:88] waiting for apiserver healthz status ...
	I1204 23:13:29.058217  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 23:13:29.058262  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 23:13:29.093177  389201 cri.go:89] found id: "697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f"
	I1204 23:13:29.093208  389201 cri.go:89] found id: ""
	I1204 23:13:29.093217  389201 logs.go:282] 1 containers: [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f]
	I1204 23:13:29.093301  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.096893  389201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 23:13:29.096964  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 23:13:29.132522  389201 cri.go:89] found id: "249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1"
	I1204 23:13:29.132544  389201 cri.go:89] found id: ""
	I1204 23:13:29.132554  389201 logs.go:282] 1 containers: [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1]
	I1204 23:13:29.132596  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.136114  389201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 23:13:29.136174  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 23:13:29.171816  389201 cri.go:89] found id: "1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2"
	I1204 23:13:29.171839  389201 cri.go:89] found id: ""
	I1204 23:13:29.171850  389201 logs.go:282] 1 containers: [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2]
	I1204 23:13:29.171897  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.175512  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 23:13:29.175584  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 23:13:29.212035  389201 cri.go:89] found id: "f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc"
	I1204 23:13:29.212060  389201 cri.go:89] found id: ""
	I1204 23:13:29.212069  389201 logs.go:282] 1 containers: [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc]
	I1204 23:13:29.212116  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.215601  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 23:13:29.215669  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 23:13:29.251281  389201 cri.go:89] found id: "76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d"
	I1204 23:13:29.251304  389201 cri.go:89] found id: ""
	I1204 23:13:29.251312  389201 logs.go:282] 1 containers: [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d]
	I1204 23:13:29.251358  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.255228  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 23:13:29.255342  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 23:13:29.290460  389201 cri.go:89] found id: "c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec"
	I1204 23:13:29.290486  389201 cri.go:89] found id: ""
	I1204 23:13:29.290496  389201 logs.go:282] 1 containers: [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec]
	I1204 23:13:29.290559  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.294114  389201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 23:13:29.294191  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 23:13:29.330311  389201 cri.go:89] found id: "f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac"
	I1204 23:13:29.330336  389201 cri.go:89] found id: ""
	I1204 23:13:29.330346  389201 logs.go:282] 1 containers: [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac]
	I1204 23:13:29.330396  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.333992  389201 logs.go:123] Gathering logs for kube-proxy [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d] ...
	I1204 23:13:29.334023  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d"
	I1204 23:13:29.368566  389201 logs.go:123] Gathering logs for kindnet [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac] ...
	I1204 23:13:29.368596  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac"
	I1204 23:13:29.402199  389201 logs.go:123] Gathering logs for CRI-O ...
	I1204 23:13:29.402229  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 23:13:29.482290  389201 logs.go:123] Gathering logs for dmesg ...
	I1204 23:13:29.482339  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 23:13:29.510099  389201 logs.go:123] Gathering logs for describe nodes ...
	I1204 23:13:29.510142  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 23:13:29.615012  389201 logs.go:123] Gathering logs for kube-apiserver [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f] ...
	I1204 23:13:29.615047  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f"
	I1204 23:13:29.660921  389201 logs.go:123] Gathering logs for etcd [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1] ...
	I1204 23:13:29.660962  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1"
	I1204 23:13:29.704015  389201 logs.go:123] Gathering logs for coredns [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2] ...
	I1204 23:13:29.704060  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2"
	I1204 23:13:29.747065  389201 logs.go:123] Gathering logs for kubelet ...
	I1204 23:13:29.747100  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1204 23:13:29.827553  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:38 addons-630093 kubelet[1643]: W1204 23:11:38.340569    1643 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.827776  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:38 addons-630093 kubelet[1643]: E1204 23:11:38.340638    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.839459  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.658654    1643 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-630093" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.839672  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.658718    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.839847  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.658773    1643 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.840075  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.658814    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.840275  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661330    1643 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.840505  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661384    1643 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.840699  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661600    1643 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.840936  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661632    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.841134  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661689    1643 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.841361  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661706    1643 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.841560  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661862    1643 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.841791  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661888    1643 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.842000  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661952    1643 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.842238  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661968    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	I1204 23:13:29.875377  389201 logs.go:123] Gathering logs for kube-scheduler [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc] ...
	I1204 23:13:29.875420  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc"
	I1204 23:13:29.915909  389201 logs.go:123] Gathering logs for kube-controller-manager [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec] ...
	I1204 23:13:29.915942  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec"
	I1204 23:13:29.975760  389201 logs.go:123] Gathering logs for container status ...
	I1204 23:13:29.975799  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 23:13:30.020004  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:13:30.020036  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1204 23:13:30.020104  389201 out.go:270] X Problems detected in kubelet:
	W1204 23:13:30.020121  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661706    1643 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:30.020132  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661862    1643 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-630093' and this object
	W1204 23:13:30.020149  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661888    1643 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:30.020164  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661952    1643 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:30.020176  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661968    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	I1204 23:13:30.020187  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:13:30.020199  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:13:40.021029  389201 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1204 23:13:40.025015  389201 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1204 23:13:40.026016  389201 api_server.go:141] control plane version: v1.31.2
	I1204 23:13:40.026045  389201 api_server.go:131] duration metric: took 10.967868289s to wait for apiserver health ...
	I1204 23:13:40.026053  389201 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 23:13:40.026087  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 23:13:40.026139  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 23:13:40.061619  389201 cri.go:89] found id: "697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f"
	I1204 23:13:40.061656  389201 cri.go:89] found id: ""
	I1204 23:13:40.061667  389201 logs.go:282] 1 containers: [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f]
	I1204 23:13:40.061726  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.065276  389201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 23:13:40.065347  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 23:13:40.099762  389201 cri.go:89] found id: "249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1"
	I1204 23:13:40.099784  389201 cri.go:89] found id: ""
	I1204 23:13:40.099791  389201 logs.go:282] 1 containers: [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1]
	I1204 23:13:40.099846  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.103315  389201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 23:13:40.103376  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 23:13:40.138517  389201 cri.go:89] found id: "1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2"
	I1204 23:13:40.138548  389201 cri.go:89] found id: ""
	I1204 23:13:40.138558  389201 logs.go:282] 1 containers: [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2]
	I1204 23:13:40.138608  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.142278  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 23:13:40.142338  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 23:13:40.177139  389201 cri.go:89] found id: "f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc"
	I1204 23:13:40.177162  389201 cri.go:89] found id: ""
	I1204 23:13:40.177169  389201 logs.go:282] 1 containers: [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc]
	I1204 23:13:40.177224  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.180724  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 23:13:40.180787  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 23:13:40.215881  389201 cri.go:89] found id: "76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d"
	I1204 23:13:40.215909  389201 cri.go:89] found id: ""
	I1204 23:13:40.215921  389201 logs.go:282] 1 containers: [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d]
	I1204 23:13:40.215978  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.219605  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 23:13:40.219672  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 23:13:40.254791  389201 cri.go:89] found id: "c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec"
	I1204 23:13:40.254818  389201 cri.go:89] found id: ""
	I1204 23:13:40.254830  389201 logs.go:282] 1 containers: [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec]
	I1204 23:13:40.254883  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.258537  389201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 23:13:40.258600  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 23:13:40.293449  389201 cri.go:89] found id: "f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac"
	I1204 23:13:40.293476  389201 cri.go:89] found id: ""
	I1204 23:13:40.293486  389201 logs.go:282] 1 containers: [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac]
	I1204 23:13:40.293542  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.297150  389201 logs.go:123] Gathering logs for CRI-O ...
	I1204 23:13:40.297182  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 23:13:40.372794  389201 logs.go:123] Gathering logs for container status ...
	I1204 23:13:40.372843  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 23:13:40.419461  389201 logs.go:123] Gathering logs for describe nodes ...
	I1204 23:13:40.419498  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 23:13:40.534097  389201 logs.go:123] Gathering logs for etcd [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1] ...
	I1204 23:13:40.534131  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1"
	I1204 23:13:40.578901  389201 logs.go:123] Gathering logs for coredns [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2] ...
	I1204 23:13:40.578941  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2"
	I1204 23:13:40.616890  389201 logs.go:123] Gathering logs for kube-controller-manager [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec] ...
	I1204 23:13:40.616923  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec"
	I1204 23:13:40.676313  389201 logs.go:123] Gathering logs for kube-proxy [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d] ...
	I1204 23:13:40.676354  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d"
	I1204 23:13:40.712137  389201 logs.go:123] Gathering logs for kindnet [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac] ...
	I1204 23:13:40.712171  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac"
	I1204 23:13:40.749253  389201 logs.go:123] Gathering logs for kubelet ...
	I1204 23:13:40.749283  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1204 23:13:40.793451  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:38 addons-630093 kubelet[1643]: W1204 23:11:38.340569    1643 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.793680  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:38 addons-630093 kubelet[1643]: E1204 23:11:38.340638    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.805200  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.658654    1643 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-630093" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.805392  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.658718    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.805575  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.658773    1643 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.805790  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.658814    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.805984  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661330    1643 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.806212  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661384    1643 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.806412  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661600    1643 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.806670  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661632    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.806884  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661689    1643 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.807109  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661706    1643 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.807303  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661862    1643 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.807526  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661888    1643 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.807722  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661952    1643 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.807952  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661968    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	I1204 23:13:40.842035  389201 logs.go:123] Gathering logs for dmesg ...
	I1204 23:13:40.842083  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 23:13:40.868911  389201 logs.go:123] Gathering logs for kube-apiserver [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f] ...
	I1204 23:13:40.868949  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f"
	I1204 23:13:40.915327  389201 logs.go:123] Gathering logs for kube-scheduler [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc] ...
	I1204 23:13:40.915367  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc"
	I1204 23:13:40.958116  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:13:40.958151  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1204 23:13:40.958253  389201 out.go:270] X Problems detected in kubelet:
	W1204 23:13:40.958268  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661706    1643 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.958278  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661862    1643 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.958294  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661888    1643 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.958308  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661952    1643 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.958323  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661968    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	I1204 23:13:40.958329  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:13:40.958338  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:13:50.969322  389201 system_pods.go:59] 19 kube-system pods found
	I1204 23:13:50.969358  389201 system_pods.go:61] "amd-gpu-device-plugin-xfdff" [b964506a-e0bb-4f8e-a33d-b1583ba8451e] Running
	I1204 23:13:50.969363  389201 system_pods.go:61] "coredns-7c65d6cfc9-nvslc" [e12dda0f-2d10-4096-b12f-73bd871cc18e] Running
	I1204 23:13:50.969368  389201 system_pods.go:61] "csi-hostpath-attacher-0" [af4d7f93-4989-4c1d-8c89-43d0e74f1a44] Running
	I1204 23:13:50.969372  389201 system_pods.go:61] "csi-hostpath-resizer-0" [5198084f-6ce5-4b12-89f8-5d8a76057764] Running
	I1204 23:13:50.969375  389201 system_pods.go:61] "csi-hostpathplugin-97jlr" [1d17a273-85e7-4f77-9bbe-7786a88d0ebe] Running
	I1204 23:13:50.969379  389201 system_pods.go:61] "etcd-addons-630093" [7758ddc9-6dfb-4fe8-a37f-1ef8170cd720] Running
	I1204 23:13:50.969382  389201 system_pods.go:61] "kindnet-sklhp" [a2a719ef-fccf-456e-88ac-b6e5fad34e3e] Running
	I1204 23:13:50.969387  389201 system_pods.go:61] "kube-apiserver-addons-630093" [34402f18-4ebe-4e53-9495-549544e9f70c] Running
	I1204 23:13:50.969393  389201 system_pods.go:61] "kube-controller-manager-addons-630093" [e33f5809-04da-4fb0-8265-2e29e7f90e15] Running
	I1204 23:13:50.969408  389201 system_pods.go:61] "kube-ingress-dns-minikube" [4cda5680-90e6-43e2-b35f-bf0976f6fef3] Running
	I1204 23:13:50.969415  389201 system_pods.go:61] "kube-proxy-k9l4p" [bddbd74f-1a8f-4181-b2f7-decc74059f10] Running
	I1204 23:13:50.969420  389201 system_pods.go:61] "kube-scheduler-addons-630093" [1f496311-6985-4c79-a19a-4ceade68e41e] Running
	I1204 23:13:50.969429  389201 system_pods.go:61] "metrics-server-84c5f94fbc-vtkhx" [cec44a14-191c-4123-b802-68a2c04f883d] Running
	I1204 23:13:50.969434  389201 system_pods.go:61] "nvidia-device-plugin-daemonset-rj8jd" [4960e5ae-fa86-4256-ac61-055f4d0adce3] Running
	I1204 23:13:50.969441  389201 system_pods.go:61] "registry-66c9cd494c-hxfdr" [b4aeaa23-62f9-4d1d-ba93-e79530728a03] Running
	I1204 23:13:50.969444  389201 system_pods.go:61] "registry-proxy-s54q4" [63f58b93-3d5f-4e3c-856e-74c6e4079acd] Running
	I1204 23:13:50.969453  389201 system_pods.go:61] "snapshot-controller-56fcc65765-2492d" [a604be0a-c061-4a65-9d32-0b98fff12222] Running
	I1204 23:13:50.969458  389201 system_pods.go:61] "snapshot-controller-56fcc65765-xtclh" [845fd71c-634d-41e2-a101-08a0c1458418] Running
	I1204 23:13:50.969461  389201 system_pods.go:61] "storage-provisioner" [cde6de53-e600-4898-a1c3-df78f4d4e6ff] Running
	I1204 23:13:50.969470  389201 system_pods.go:74] duration metric: took 10.943410983s to wait for pod list to return data ...
	I1204 23:13:50.969480  389201 default_sa.go:34] waiting for default service account to be created ...
	I1204 23:13:50.972205  389201 default_sa.go:45] found service account: "default"
	I1204 23:13:50.972229  389201 default_sa.go:55] duration metric: took 2.740927ms for default service account to be created ...
	I1204 23:13:50.972237  389201 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 23:13:50.980831  389201 system_pods.go:86] 19 kube-system pods found
	I1204 23:13:50.980861  389201 system_pods.go:89] "amd-gpu-device-plugin-xfdff" [b964506a-e0bb-4f8e-a33d-b1583ba8451e] Running
	I1204 23:13:50.980867  389201 system_pods.go:89] "coredns-7c65d6cfc9-nvslc" [e12dda0f-2d10-4096-b12f-73bd871cc18e] Running
	I1204 23:13:50.980872  389201 system_pods.go:89] "csi-hostpath-attacher-0" [af4d7f93-4989-4c1d-8c89-43d0e74f1a44] Running
	I1204 23:13:50.980876  389201 system_pods.go:89] "csi-hostpath-resizer-0" [5198084f-6ce5-4b12-89f8-5d8a76057764] Running
	I1204 23:13:50.980880  389201 system_pods.go:89] "csi-hostpathplugin-97jlr" [1d17a273-85e7-4f77-9bbe-7786a88d0ebe] Running
	I1204 23:13:50.980883  389201 system_pods.go:89] "etcd-addons-630093" [7758ddc9-6dfb-4fe8-a37f-1ef8170cd720] Running
	I1204 23:13:50.980887  389201 system_pods.go:89] "kindnet-sklhp" [a2a719ef-fccf-456e-88ac-b6e5fad34e3e] Running
	I1204 23:13:50.980891  389201 system_pods.go:89] "kube-apiserver-addons-630093" [34402f18-4ebe-4e53-9495-549544e9f70c] Running
	I1204 23:13:50.980895  389201 system_pods.go:89] "kube-controller-manager-addons-630093" [e33f5809-04da-4fb0-8265-2e29e7f90e15] Running
	I1204 23:13:50.980899  389201 system_pods.go:89] "kube-ingress-dns-minikube" [4cda5680-90e6-43e2-b35f-bf0976f6fef3] Running
	I1204 23:13:50.980905  389201 system_pods.go:89] "kube-proxy-k9l4p" [bddbd74f-1a8f-4181-b2f7-decc74059f10] Running
	I1204 23:13:50.980910  389201 system_pods.go:89] "kube-scheduler-addons-630093" [1f496311-6985-4c79-a19a-4ceade68e41e] Running
	I1204 23:13:50.980914  389201 system_pods.go:89] "metrics-server-84c5f94fbc-vtkhx" [cec44a14-191c-4123-b802-68a2c04f883d] Running
	I1204 23:13:50.980920  389201 system_pods.go:89] "nvidia-device-plugin-daemonset-rj8jd" [4960e5ae-fa86-4256-ac61-055f4d0adce3] Running
	I1204 23:13:50.980926  389201 system_pods.go:89] "registry-66c9cd494c-hxfdr" [b4aeaa23-62f9-4d1d-ba93-e79530728a03] Running
	I1204 23:13:50.980929  389201 system_pods.go:89] "registry-proxy-s54q4" [63f58b93-3d5f-4e3c-856e-74c6e4079acd] Running
	I1204 23:13:50.980933  389201 system_pods.go:89] "snapshot-controller-56fcc65765-2492d" [a604be0a-c061-4a65-9d32-0b98fff12222] Running
	I1204 23:13:50.980939  389201 system_pods.go:89] "snapshot-controller-56fcc65765-xtclh" [845fd71c-634d-41e2-a101-08a0c1458418] Running
	I1204 23:13:50.980943  389201 system_pods.go:89] "storage-provisioner" [cde6de53-e600-4898-a1c3-df78f4d4e6ff] Running
	I1204 23:13:50.980952  389201 system_pods.go:126] duration metric: took 8.709075ms to wait for k8s-apps to be running ...
	I1204 23:13:50.980961  389201 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 23:13:50.981009  389201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:13:50.992805  389201 system_svc.go:56] duration metric: took 11.832695ms WaitForService to wait for kubelet
	I1204 23:13:50.992839  389201 kubeadm.go:582] duration metric: took 2m12.65796392s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 23:13:50.992860  389201 node_conditions.go:102] verifying NodePressure condition ...
	I1204 23:13:50.996391  389201 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1204 23:13:50.996430  389201 node_conditions.go:123] node cpu capacity is 8
	I1204 23:13:50.996447  389201 node_conditions.go:105] duration metric: took 3.580009ms to run NodePressure ...
	I1204 23:13:50.996463  389201 start.go:241] waiting for startup goroutines ...
	I1204 23:13:50.996483  389201 start.go:246] waiting for cluster config update ...
	I1204 23:13:50.996508  389201 start.go:255] writing updated cluster config ...
	I1204 23:13:50.996891  389201 ssh_runner.go:195] Run: rm -f paused
	I1204 23:13:51.048677  389201 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 23:13:51.051940  389201 out.go:177] * Done! kubectl is now configured to use "addons-630093" cluster and "default" namespace by default
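The start-up phase above finishes with kubectl pointed at the new cluster. As a quick, hypothetical sanity check (not part of the captured run, assuming kubectl and the minikube profile are available locally):

    $ kubectl config current-context             # expected: addons-630093
    $ kubectl --context addons-630093 get nodes  # the single control-plane node should be Ready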
	
	
	==> CRI-O <==
	Dec 04 23:20:10 addons-630093 crio[1031]: time="2024-12-04 23:20:10.334498213Z" level=info msg="Got pod network &{Name:metrics-server-84c5f94fbc-vtkhx Namespace:kube-system ID:483727d0ea1ad5150122a589bbbe38581ad76cc8d0abb4c2bd96cc2f69324c02 UID:cec44a14-191c-4123-b802-68a2c04f883d NetNS:/var/run/netns/650fe825-da4c-4695-bbcc-c50740ad3e10 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 04 23:20:10 addons-630093 crio[1031]: time="2024-12-04 23:20:10.334681343Z" level=info msg="Deleting pod kube-system_metrics-server-84c5f94fbc-vtkhx from CNI network \"kindnet\" (type=ptp)"
	Dec 04 23:20:10 addons-630093 crio[1031]: time="2024-12-04 23:20:10.368269238Z" level=info msg="Stopped pod sandbox: 483727d0ea1ad5150122a589bbbe38581ad76cc8d0abb4c2bd96cc2f69324c02" id=470706c6-0a1e-4069-878a-9eded12221b5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 04 23:20:10 addons-630093 crio[1031]: time="2024-12-04 23:20:10.815111703Z" level=info msg="Removing container: 4bde5393ab67314863e12f398bcbb31ed62dfe2000b785c5245390fa674d301c" id=72376f99-e137-4844-a8c6-dd3675e0a471 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 04 23:20:10 addons-630093 crio[1031]: time="2024-12-04 23:20:10.833244764Z" level=info msg="Removed container 4bde5393ab67314863e12f398bcbb31ed62dfe2000b785c5245390fa674d301c: kube-system/metrics-server-84c5f94fbc-vtkhx/metrics-server" id=72376f99-e137-4844-a8c6-dd3675e0a471 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 04 23:20:11 addons-630093 crio[1031]: time="2024-12-04 23:20:11.810913199Z" level=info msg="Checking image status: docker.io/nginx:latest" id=0b2201f8-d8ad-45cd-b8b0-b5fa2ea56ba1 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:20:11 addons-630093 crio[1031]: time="2024-12-04 23:20:11.811294933Z" level=info msg="Image docker.io/nginx:latest not found" id=0b2201f8-d8ad-45cd-b8b0-b5fa2ea56ba1 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:20:15 addons-630093 crio[1031]: time="2024-12-04 23:20:15.754880972Z" level=info msg="Stopping container: 3c19424241254aa7b251454c93032375546f5aa0bb78359900dd507679edd6cf (timeout: 30s)" id=26986a97-aefa-42ba-aeb4-6d9a341cf708 name=/runtime.v1.RuntimeService/StopContainer
	Dec 04 23:20:15 addons-630093 conmon[3752]: conmon 3c19424241254aa7b251 <ninfo>: container 3764 exited with status 2
	Dec 04 23:20:15 addons-630093 crio[1031]: time="2024-12-04 23:20:15.887985524Z" level=info msg="Stopped container 3c19424241254aa7b251454c93032375546f5aa0bb78359900dd507679edd6cf: default/cloud-spanner-emulator-dc5db94f4-qb868/cloud-spanner-emulator" id=26986a97-aefa-42ba-aeb4-6d9a341cf708 name=/runtime.v1.RuntimeService/StopContainer
	Dec 04 23:20:15 addons-630093 crio[1031]: time="2024-12-04 23:20:15.888553110Z" level=info msg="Stopping pod sandbox: 7e0131b1c64fcff32c4715768c2c6ae69159c96f5d60be32f2a2eab4ae24adf4" id=8272ca48-d0af-49d2-9148-014ed26c84d4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 04 23:20:15 addons-630093 crio[1031]: time="2024-12-04 23:20:15.888779558Z" level=info msg="Got pod network &{Name:cloud-spanner-emulator-dc5db94f4-qb868 Namespace:default ID:7e0131b1c64fcff32c4715768c2c6ae69159c96f5d60be32f2a2eab4ae24adf4 UID:bd2ee58a-86d6-4981-ab81-15c06c700604 NetNS:/var/run/netns/47bc83d2-765a-4a02-b0ef-39277ed33817 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 04 23:20:15 addons-630093 crio[1031]: time="2024-12-04 23:20:15.888903685Z" level=info msg="Deleting pod default_cloud-spanner-emulator-dc5db94f4-qb868 from CNI network \"kindnet\" (type=ptp)"
	Dec 04 23:20:15 addons-630093 crio[1031]: time="2024-12-04 23:20:15.928194386Z" level=info msg="Stopped pod sandbox: 7e0131b1c64fcff32c4715768c2c6ae69159c96f5d60be32f2a2eab4ae24adf4" id=8272ca48-d0af-49d2-9148-014ed26c84d4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 04 23:20:16 addons-630093 crio[1031]: time="2024-12-04 23:20:16.831025582Z" level=info msg="Removing container: 3c19424241254aa7b251454c93032375546f5aa0bb78359900dd507679edd6cf" id=f3160dd4-a09a-4f55-b840-b68eb9e20c48 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 04 23:20:16 addons-630093 crio[1031]: time="2024-12-04 23:20:16.845184164Z" level=info msg="Removed container 3c19424241254aa7b251454c93032375546f5aa0bb78359900dd507679edd6cf: default/cloud-spanner-emulator-dc5db94f4-qb868/cloud-spanner-emulator" id=f3160dd4-a09a-4f55-b840-b68eb9e20c48 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 04 23:20:18 addons-630093 crio[1031]: time="2024-12-04 23:20:18.034513244Z" level=warning msg="Stopping container c0b9ea5a54fce6f7ab008e85bf645783dfab5ad639c39d9edd23edb4365258d7 with stop signal timed out: timeout reached after 30 seconds waiting for container process to exit" id=5603a7b7-7d34-4b2f-aa23-6b47317be399 name=/runtime.v1.RuntimeService/StopContainer
	Dec 04 23:20:18 addons-630093 conmon[4560]: conmon c0b9ea5a54fce6f7ab00 <ninfo>: container 4572 exited with status 137
	Dec 04 23:20:18 addons-630093 crio[1031]: time="2024-12-04 23:20:18.168419403Z" level=info msg="Stopped container c0b9ea5a54fce6f7ab008e85bf645783dfab5ad639c39d9edd23edb4365258d7: local-path-storage/local-path-provisioner-86d989889c-zjwsn/local-path-provisioner" id=5603a7b7-7d34-4b2f-aa23-6b47317be399 name=/runtime.v1.RuntimeService/StopContainer
	Dec 04 23:20:18 addons-630093 crio[1031]: time="2024-12-04 23:20:18.169031795Z" level=info msg="Stopping pod sandbox: 2a9f5fb1eead66e284a1d8ee2729eb10744c5dd73fd6a75f4ffba0818cde6270" id=3a74ac69-6553-48c9-845d-e0464bdf0634 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 04 23:20:18 addons-630093 crio[1031]: time="2024-12-04 23:20:18.169278876Z" level=info msg="Got pod network &{Name:local-path-provisioner-86d989889c-zjwsn Namespace:local-path-storage ID:2a9f5fb1eead66e284a1d8ee2729eb10744c5dd73fd6a75f4ffba0818cde6270 UID:23620bc7-9fcd-468c-a015-3fe5cc10c3b0 NetNS:/var/run/netns/27382405-f80b-4301-b065-eb8976308362 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 04 23:20:18 addons-630093 crio[1031]: time="2024-12-04 23:20:18.169395222Z" level=info msg="Deleting pod local-path-storage_local-path-provisioner-86d989889c-zjwsn from CNI network \"kindnet\" (type=ptp)"
	Dec 04 23:20:18 addons-630093 crio[1031]: time="2024-12-04 23:20:18.204228477Z" level=info msg="Stopped pod sandbox: 2a9f5fb1eead66e284a1d8ee2729eb10744c5dd73fd6a75f4ffba0818cde6270" id=3a74ac69-6553-48c9-845d-e0464bdf0634 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 04 23:20:18 addons-630093 crio[1031]: time="2024-12-04 23:20:18.838010656Z" level=info msg="Removing container: c0b9ea5a54fce6f7ab008e85bf645783dfab5ad639c39d9edd23edb4365258d7" id=6f35dec6-37aa-4951-a48f-a2cc9a7c8bbb name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 04 23:20:18 addons-630093 crio[1031]: time="2024-12-04 23:20:18.851760629Z" level=info msg="Removed container c0b9ea5a54fce6f7ab008e85bf645783dfab5ad639c39d9edd23edb4365258d7: local-path-storage/local-path-provisioner-86d989889c-zjwsn/local-path-provisioner" id=6f35dec6-37aa-4951-a48f-a2cc9a7c8bbb name=/runtime.v1.RuntimeService/RemoveContainer
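The 23:20:11 entries show CRI-O still unable to find an nginx image on the node, consistent with the nginx pod's ImagePullBackOff in the failed Ingress test. A hedged diagnostic sketch, assuming SSH access to the node and crictl on its PATH (illustrative commands, not from the run):

    $ minikube -p addons-630093 ssh -- sudo crictl images | grep nginx           # which nginx images, if any, landed on the node
    $ minikube -p addons-630093 ssh -- sudo crictl pull docker.io/nginx:alpine   # retry the pull by hand to surface the registry error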
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	a92f917845840       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          6 minutes ago       Running             busybox                                  0                   9101d3097d84d       busybox
	19a975e308aa0       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b                             7 minutes ago       Running             controller                               0                   f7e4db205d4a2       ingress-nginx-controller-5f85ff4588-bjrmz
	153039955b8e9       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          7 minutes ago       Running             csi-snapshotter                          0                   75bf3104e4902       csi-hostpathplugin-97jlr
	86a86137e5e1a       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          7 minutes ago       Running             csi-provisioner                          0                   75bf3104e4902       csi-hostpathplugin-97jlr
	722cda2e61fdf       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            7 minutes ago       Running             liveness-probe                           0                   75bf3104e4902       csi-hostpathplugin-97jlr
	520228ead6e81       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           7 minutes ago       Running             hostpath                                 0                   75bf3104e4902       csi-hostpathplugin-97jlr
	904410f83eb89       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                7 minutes ago       Running             node-driver-registrar                    0                   75bf3104e4902       csi-hostpathplugin-97jlr
	d43b4e626d869       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f                   7 minutes ago       Exited              patch                                    0                   1453371ecba6e       ingress-nginx-admission-patch-6klmq
	9cfd8f1d1fc9d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f                   7 minutes ago       Exited              create                                   0                   6a2e4839790d0       ingress-nginx-admission-create-g9mgr
	31862be06ca2f       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   8 minutes ago       Running             csi-external-health-monitor-controller   0                   75bf3104e4902       csi-hostpathplugin-97jlr
	c3bf77a4a88bb       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      8 minutes ago       Running             volume-snapshot-controller               0                   6be372042ec01       snapshot-controller-56fcc65765-xtclh
	ad2a02af7805b       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      8 minutes ago       Running             volume-snapshot-controller               0                   ed2dd407b0f06       snapshot-controller-56fcc65765-2492d
	34d29b45443cc       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             8 minutes ago       Running             minikube-ingress-dns                     0                   fe05a9e0f9e54       kube-ingress-dns-minikube
	facaa7e1e233d       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             8 minutes ago       Running             csi-attacher                             0                   5c82f2a4a9fdc       csi-hostpath-attacher-0
	86ba1534808a8       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              8 minutes ago       Running             csi-resizer                              0                   0e397ea764d0c       csi-hostpath-resizer-0
	1c628d0404971       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             8 minutes ago       Running             coredns                                  0                   e5a18048ffd94       coredns-7c65d6cfc9-nvslc
	7579ef8738441       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             8 minutes ago       Running             storage-provisioner                      0                   53117b6914cba       storage-provisioner
	f0e1e1197d418       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16                                           8 minutes ago       Running             kindnet-cni                              0                   8e1077c9b19f2       kindnet-sklhp
	76b8a8033f246       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                                             8 minutes ago       Running             kube-proxy                               0                   7b72d950d834d       kube-proxy-k9l4p
	f25ca8d234e67       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                                             8 minutes ago       Running             kube-scheduler                           0                   6ecfaa8cbb0a8       kube-scheduler-addons-630093
	697a8666b9beb       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                                             8 minutes ago       Running             kube-apiserver                           0                   c5cc52570c5da       kube-apiserver-addons-630093
	249b17c70ce14       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             8 minutes ago       Running             etcd                                     0                   5c544b67b37e6       etcd-addons-630093
	c18ad7ba7d7db       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                                             8 minutes ago       Running             kube-controller-manager                  0                   2b2d046f58c6b       kube-controller-manager-addons-630093
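This inventory is what the container runtime reports for the node. To regenerate it by hand, a minimal sketch (assuming crictl is installed in the minikube node image, as it is for the CRI-O runtime):

    $ minikube -p addons-630093 ssh -- sudo crictl ps -a   # all containers, including Exited ones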
	
	
	==> coredns [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2] <==
	[INFO] 10.244.0.13:36200 - 58124 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000101425s
	[INFO] 10.244.0.13:43691 - 63611 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005338233s
	[INFO] 10.244.0.13:43691 - 63271 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005381209s
	[INFO] 10.244.0.13:44344 - 26272 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005410445s
	[INFO] 10.244.0.13:44344 - 26005 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006018948s
	[INFO] 10.244.0.13:60838 - 12332 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005880377s
	[INFO] 10.244.0.13:60838 - 12579 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006174676s
	[INFO] 10.244.0.13:53538 - 12345 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000091701s
	[INFO] 10.244.0.13:53538 - 12144 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000126528s
	[INFO] 10.244.0.21:59547 - 34898 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000213243s
	[INFO] 10.244.0.21:42413 - 63992 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000314574s
	[INFO] 10.244.0.21:50534 - 50228 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001818s
	[INFO] 10.244.0.21:44438 - 35236 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000136337s
	[INFO] 10.244.0.21:49334 - 10258 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000138449s
	[INFO] 10.244.0.21:53611 - 11525 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00012321s
	[INFO] 10.244.0.21:33638 - 34118 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007323199s
	[INFO] 10.244.0.21:43427 - 30051 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007940861s
	[INFO] 10.244.0.21:43377 - 12238 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008381865s
	[INFO] 10.244.0.21:40602 - 12057 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.009350731s
	[INFO] 10.244.0.21:47148 - 45016 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007185414s
	[INFO] 10.244.0.21:42834 - 25970 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007493941s
	[INFO] 10.244.0.21:44226 - 13563 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001030468s
	[INFO] 10.244.0.21:36544 - 7675 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001087253s
	[INFO] 10.244.0.25:33322 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000238152s
	[INFO] 10.244.0.25:43627 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00014501s
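The NXDOMAIN bursts are the normal walk of the pod's resolv.conf search path (cluster.local plus the GCE-internal suffixes) before the fully qualified name answers NOERROR. A hypothetical way to observe the same expansion from inside the cluster, assuming the busybox pod above ships nslookup:

    $ kubectl --context addons-630093 exec busybox -- cat /etc/resolv.conf
    $ kubectl --context addons-630093 exec busybox -- nslookup registry.kube-system.svc.cluster.local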
	
	
	==> describe nodes <==
	Name:               addons-630093
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-630093
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=addons-630093
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T23_11_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-630093
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-630093"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:11:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-630093
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 23:20:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 23:19:41 +0000   Wed, 04 Dec 2024 23:11:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 23:19:41 +0000   Wed, 04 Dec 2024 23:11:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 23:19:41 +0000   Wed, 04 Dec 2024 23:11:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 23:19:41 +0000   Wed, 04 Dec 2024 23:11:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-630093
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	System Info:
	  Machine ID:                 8258e1e2133c40cebfa95f57ba32eee3
	  System UUID:                bf67fca3-467d-49b0-b09d-7f56669f6671
	  Boot ID:                    ac1c7763-4d61-4ba9-8c16-bcbc5ed122b3
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m33s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  default                     task-pv-pod                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-bjrmz    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         8m40s
	  kube-system                 coredns-7c65d6cfc9-nvslc                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m46s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m38s
	  kube-system                 csi-hostpathplugin-97jlr                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m27s
	  kube-system                 etcd-addons-630093                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m52s
	  kube-system                 kindnet-sklhp                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m46s
	  kube-system                 kube-apiserver-addons-630093                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m52s
	  kube-system                 kube-controller-manager-addons-630093        200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m52s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m42s
	  kube-system                 kube-proxy-k9l4p                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 kube-scheduler-addons-630093                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m52s
	  kube-system                 snapshot-controller-56fcc65765-2492d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 snapshot-controller-56fcc65765-xtclh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m39s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m41s                  kube-proxy       
	  Normal   Starting                 8m57s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m57s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  8m57s (x8 over 8m57s)  kubelet          Node addons-630093 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m57s (x8 over 8m57s)  kubelet          Node addons-630093 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m57s (x7 over 8m57s)  kubelet          Node addons-630093 status is now: NodeHasSufficientPID
	  Normal   Starting                 8m52s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m52s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  8m51s                  kubelet          Node addons-630093 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m51s                  kubelet          Node addons-630093 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m51s                  kubelet          Node addons-630093 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m47s                  node-controller  Node addons-630093 event: Registered Node addons-630093 in Controller
	  Normal   NodeReady                8m27s                  kubelet          Node addons-630093 status is now: NodeReady
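The node stays Ready throughout, with requests well under capacity (950m CPU of 8 cores, 310Mi of ~32Gi memory), so the Ingress failure is not a scheduling or resource-pressure problem. This view can be reproduced directly (a standard kubectl invocation, shown for reference):

    $ kubectl --context addons-630093 describe node addons-630093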
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 46 91 d1 19 2f 08 06
	[Dec 4 22:54] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 d8 34 c4 9e fd 08 06
	[  +0.000456] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 46 91 d1 19 2f 08 06
	[ +35.699001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 90 40 5e 28 e1 08 06
	[Dec 4 22:55] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 3d b0 9a 20 99 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff de 90 40 5e 28 e1 08 06
	[  +1.225322] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000021] ll header: 00000000: ff ff ff ff ff ff b2 70 f6 e4 04 7e 08 06
	[  +0.023795] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a e9 42 d7 ae 99 08 06
	[  +8.010933] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 92 a5 ca 19 c6 08 06
	[ +18.260065] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9e b7 56 b9 28 5b 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ae 92 a5 ca 19 c6 08 06
	[ +24.579912] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa ca b1 23 b4 91 08 06
	[  +0.000531] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3a e9 42 d7 ae 99 08 06
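The repeated "martian source" lines are the kernel logging packets whose source address is implausible for the receiving interface, a common artifact of container bridge networking and harmless here. To confirm martian logging is deliberately enabled on the node, a hypothetical check (the sysctl key is standard Linux):

    $ minikube -p addons-630093 ssh -- sysctl net.ipv4.conf.all.log_martians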
	
	
	==> etcd [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1] <==
	{"level":"info","ts":"2024-12-04T23:11:40.217773Z","caller":"traceutil/trace.go:171","msg":"trace[1405136476] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-addons-630093; range_end:; response_count:1; response_revision:392; }","duration":"108.112329ms","start":"2024-12-04T23:11:40.109647Z","end":"2024-12-04T23:11:40.217759Z","steps":["trace[1405136476] 'agreement among raft nodes before linearized reading'  (duration: 103.402111ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:11:40.605094Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.675544ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-12-04T23:11:40.605257Z","caller":"traceutil/trace.go:171","msg":"trace[803689926] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:398; }","duration":"198.852168ms","start":"2024-12-04T23:11:40.406387Z","end":"2024-12-04T23:11:40.605239Z","steps":["trace[803689926] 'range keys from in-memory index tree'  (duration: 194.382666ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:11:40.708502Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.336878ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128033691115604618 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/kube-system/amd-gpu-device-plugin\" mod_revision:0 > success:<request_put:<key:\"/registry/daemonsets/kube-system/amd-gpu-device-plugin\" value_size:3622 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-12-04T23:11:40.895257Z","caller":"traceutil/trace.go:171","msg":"trace[1109807764] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"279.117548ms","start":"2024-12-04T23:11:40.616120Z","end":"2024-12-04T23:11:40.895238Z","steps":["trace[1109807764] 'process raft request'  (duration: 279.078288ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T23:11:40.895484Z","caller":"traceutil/trace.go:171","msg":"trace[215470366] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"387.51899ms","start":"2024-12-04T23:11:40.507954Z","end":"2024-12-04T23:11:40.895473Z","steps":["trace[215470366] 'process raft request'  (duration: 96.858883ms)","trace[215470366] 'compare'  (duration: 103.229726ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-04T23:11:40.895555Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-04T23:11:40.507931Z","time spent":"387.575868ms","remote":"127.0.0.1:59108","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3684,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/amd-gpu-device-plugin\" mod_revision:0 > success:<request_put:<key:\"/registry/daemonsets/kube-system/amd-gpu-device-plugin\" value_size:3622 >> failure:<>"}
	{"level":"info","ts":"2024-12-04T23:11:40.895855Z","caller":"traceutil/trace.go:171","msg":"trace[2076159084] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"288.040682ms","start":"2024-12-04T23:11:40.607803Z","end":"2024-12-04T23:11:40.895844Z","steps":["trace[2076159084] 'process raft request'  (duration: 287.297204ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T23:11:40.895959Z","caller":"traceutil/trace.go:171","msg":"trace[705242873] linearizableReadLoop","detail":"{readStateIndex:410; appliedIndex:408; }","duration":"280.349916ms","start":"2024-12-04T23:11:40.615601Z","end":"2024-12-04T23:11:40.895951Z","steps":["trace[705242873] 'read index received'  (duration: 83.684619ms)","trace[705242873] 'applied index is now lower than readState.Index'  (duration: 196.664648ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-04T23:11:40.896113Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"280.608929ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-addons-630093\" ","response":"range_response_count:1 size:7253"}
	{"level":"info","ts":"2024-12-04T23:11:40.896138Z","caller":"traceutil/trace.go:171","msg":"trace[1318972100] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-addons-630093; range_end:; response_count:1; response_revision:401; }","duration":"280.640123ms","start":"2024-12-04T23:11:40.615490Z","end":"2024-12-04T23:11:40.896130Z","steps":["trace[1318972100] 'agreement among raft nodes before linearized reading'  (duration: 280.572794ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:11:40.896264Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"280.36641ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T23:11:40.896282Z","caller":"traceutil/trace.go:171","msg":"trace[697950005] range","detail":"{range_begin:/registry/storageclasses; range_end:; response_count:0; response_revision:401; }","duration":"280.385448ms","start":"2024-12-04T23:11:40.615891Z","end":"2024-12-04T23:11:40.896276Z","steps":["trace[697950005] 'agreement among raft nodes before linearized reading'  (duration: 280.354047ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:11:41.603321Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.477454ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T23:11:41.603924Z","caller":"traceutil/trace.go:171","msg":"trace[1769666947] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:419; }","duration":"106.090798ms","start":"2024-12-04T23:11:41.497809Z","end":"2024-12-04T23:11:41.603899Z","steps":["trace[1769666947] 'agreement among raft nodes before linearized reading'  (duration: 105.439451ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:11:41.603524Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.607937ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-addons-630093\" ","response":"range_response_count:1 size:7253"}
	{"level":"info","ts":"2024-12-04T23:11:41.604378Z","caller":"traceutil/trace.go:171","msg":"trace[1429916583] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-addons-630093; range_end:; response_count:1; response_revision:419; }","duration":"101.463597ms","start":"2024-12-04T23:11:41.502900Z","end":"2024-12-04T23:11:41.604364Z","steps":["trace[1429916583] 'agreement among raft nodes before linearized reading'  (duration: 100.553991ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T23:11:42.012812Z","caller":"traceutil/trace.go:171","msg":"trace[1073586070] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"101.602813ms","start":"2024-12-04T23:11:41.911189Z","end":"2024-12-04T23:11:42.012792Z","steps":["trace[1073586070] 'process raft request'  (duration: 87.210063ms)","trace[1073586070] 'compare'  (duration: 13.942562ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-04T23:11:42.012996Z","caller":"traceutil/trace.go:171","msg":"trace[73910532] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"101.658352ms","start":"2024-12-04T23:11:41.911329Z","end":"2024-12-04T23:11:42.012987Z","steps":["trace[73910532] 'process raft request'  (duration: 101.143669ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T23:11:42.013256Z","caller":"traceutil/trace.go:171","msg":"trace[1994636355] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"101.69878ms","start":"2024-12-04T23:11:41.911547Z","end":"2024-12-04T23:11:42.013245Z","steps":["trace[1994636355] 'process raft request'  (duration: 100.967611ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:11:42.096651Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.399561ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T23:11:42.096715Z","caller":"traceutil/trace.go:171","msg":"trace[1209668564] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:440; }","duration":"178.473778ms","start":"2024-12-04T23:11:41.918228Z","end":"2024-12-04T23:11:42.096702Z","steps":["trace[1209668564] 'agreement among raft nodes before linearized reading'  (duration: 178.384048ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:11:42.097064Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.915985ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-ingress-dns-minikube\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T23:11:42.099886Z","caller":"traceutil/trace.go:171","msg":"trace[231438469] range","detail":"{range_begin:/registry/pods/kube-system/kube-ingress-dns-minikube; range_end:; response_count:0; response_revision:440; }","duration":"181.736324ms","start":"2024-12-04T23:11:41.918132Z","end":"2024-12-04T23:11:42.099868Z","steps":["trace[231438469] 'agreement among raft nodes before linearized reading'  (duration: 178.596552ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T23:11:44.318424Z","caller":"traceutil/trace.go:171","msg":"trace[299548537] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"105.793664ms","start":"2024-12-04T23:11:44.212613Z","end":"2024-12-04T23:11:44.318407Z","steps":["trace[299548537] 'process raft request'  (duration: 103.084576ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:20:24 up  2:02,  0 users,  load average: 0.27, 0.47, 0.78
	Linux addons-630093 5.15.0-1071-gcp #79~20.04.1-Ubuntu SMP Thu Oct 17 21:59:34 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac] <==
	I1204 23:18:17.398775       1 main.go:301] handling current node
	I1204 23:18:27.395698       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:18:27.395786       1 main.go:301] handling current node
	I1204 23:18:37.402744       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:18:37.402787       1 main.go:301] handling current node
	I1204 23:18:47.396592       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:18:47.396635       1 main.go:301] handling current node
	I1204 23:18:57.395818       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:18:57.395863       1 main.go:301] handling current node
	I1204 23:19:07.397501       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:19:07.397546       1 main.go:301] handling current node
	I1204 23:19:17.398712       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:19:17.398746       1 main.go:301] handling current node
	I1204 23:19:27.398720       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:19:27.398771       1 main.go:301] handling current node
	I1204 23:19:37.402734       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:19:37.402778       1 main.go:301] handling current node
	I1204 23:19:47.395778       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:19:47.395820       1 main.go:301] handling current node
	I1204 23:19:57.395656       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:19:57.395708       1 main.go:301] handling current node
	I1204 23:20:07.395877       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:20:07.395937       1 main.go:301] handling current node
	I1204 23:20:17.395735       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:20:17.395791       1 main.go:301] handling current node
	
	
	==> kube-apiserver [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f] <==
	W1204 23:12:44.501182       1 handler_proxy.go:99] no RequestInfo found in the context
	W1204 23:12:44.501182       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 23:12:44.501270       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1204 23:12:44.501295       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1204 23:12:44.502403       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1204 23:12:44.502426       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1204 23:13:18.020994       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 23:13:18.021061       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.81.204:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.81.204:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.81.204:443: connect: connection refused" logger="UnhandledError"
	E1204 23:13:18.021072       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1204 23:13:18.022591       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.81.204:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.81.204:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.81.204:443: connect: connection refused" logger="UnhandledError"
	I1204 23:13:18.053200       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1204 23:13:59.747428       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54842: use of closed network connection
	E1204 23:13:59.921107       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54876: use of closed network connection
	I1204 23:14:08.946781       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.65.33"}
	I1204 23:14:25.954565       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1204 23:14:26.167940       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.235.196"}
	I1204 23:14:28.188596       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1204 23:14:29.205715       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1204 23:20:19.050910       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
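The recurring 503s against v1beta1.metrics.k8s.io are the aggregation layer failing to reach metrics-server, whose pod the CRI-O log above shows being torn down at 23:20:10; the queue item is finally dropped at 23:20:19. Two hypothetical checks of that state (standard kubectl subcommands):

    $ kubectl --context addons-630093 get apiservice v1beta1.metrics.k8s.io   # Available should read False once the backend is gone
    $ kubectl --context addons-630093 top nodes                               # fails while the metrics API is unavailable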
	
	
	==> kube-controller-manager [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec] <==
	E1204 23:15:15.332023       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 23:15:41.664844       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:15:41.664897       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 23:16:29.575804       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:16:29.575854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 23:17:02.559821       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:17:02.559870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 23:17:45.806997       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:17:45.807050       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 23:18:26.298216       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:18:26.298264       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 23:19:04.552124       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:19:04.552173       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1204 23:19:41.406992       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-630093"
	I1204 23:19:48.019395       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="12.981µs"
	E1204 23:19:52.162747       1 pv_controller.go:1586] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	W1204 23:19:52.414535       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:19:52.414579       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E1204 23:20:07.163168       1 pv_controller.go:1586] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	I1204 23:20:09.151080       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="7.588µs"
	I1204 23:20:15.744600       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-dc5db94f4" duration="5.216µs"
	E1204 23:20:22.163860       1 pv_controller.go:1586] "Error finding provisioning plugin for claim" err="storageclass.storage.k8s.io \"local-path\" not found" logger="persistentvolume-binder-controller" PVC="default/test-pvc"
	W1204 23:20:24.405620       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:20:24.405678       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1204 23:20:24.578762       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
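The persistentvolume-binder errors repeat because PVC default/test-pvc still references the "local-path" StorageClass after the local-path provisioner (stopped in the CRI-O log above) and its namespace were deleted. A hedged sketch of confirming the dangling claim, using names taken from these log lines:

    $ kubectl --context addons-630093 get storageclass
    $ kubectl --context addons-630093 describe pvc test-pvc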
	
	
	==> kube-proxy [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d] <==
	I1204 23:11:41.999798       1 server_linux.go:66] "Using iptables proxy"
	I1204 23:11:42.522412       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1204 23:11:42.522510       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 23:11:42.915799       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1204 23:11:42.916905       1 server_linux.go:169] "Using iptables Proxier"
	I1204 23:11:42.999168       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 23:11:42.999868       1 server.go:483] "Version info" version="v1.31.2"
	I1204 23:11:42.999987       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 23:11:43.001630       1 config.go:199] "Starting service config controller"
	I1204 23:11:43.002952       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 23:11:43.002663       1 config.go:328] "Starting node config controller"
	I1204 23:11:43.003244       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 23:11:43.002141       1 config.go:105] "Starting endpoint slice config controller"
	I1204 23:11:43.003442       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 23:11:43.105483       1 shared_informer.go:320] Caches are synced for node config
	I1204 23:11:43.105660       1 shared_informer.go:320] Caches are synced for service config
	I1204 23:11:43.105772       1 shared_informer.go:320] Caches are synced for endpoint slice config
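kube-proxy comes up in iptables mode with IPv4 as the primary family and syncs its three config caches within about a second. A hypothetical peek at the NAT chains it programs, assuming iptables is available on the node:

    $ minikube -p addons-630093 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head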
	
	
	==> kube-scheduler [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc] <==
	W1204 23:11:30.518306       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1204 23:11:30.518308       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1204 23:11:30.518319       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E1204 23:11:30.518324       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:30.518387       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1204 23:11:30.518406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.464973       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1204 23:11:31.465022       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1204 23:11:31.504488       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1204 23:11:31.504541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.546483       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1204 23:11:31.546559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.565052       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1204 23:11:31.565112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.572602       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1204 23:11:31.572647       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.606116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1204 23:11:31.606166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.628789       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1204 23:11:31.628843       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.663323       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1204 23:11:31.663367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.685908       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1204 23:11:31.685980       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1204 23:11:33.616392       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
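
The "forbidden" warnings above all land in the first seconds after the scheduler starts, before the client-ca informer cache syncs on the last line, so they read as RBAC bootstrap noise rather than a persistent permission gap. A quick way to confirm the permissions did arrive, assuming a working kubeconfig for this cluster:

	kubectl --context addons-630093 auth can-i list persistentvolumes --as=system:kube-scheduler
	kubectl --context addons-630093 auth can-i list csistoragecapacities.storage.k8s.io --as=system:kube-scheduler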
	
	
	==> kubelet <==
	Dec 04 23:20:11 addons-630093 kubelet[1643]: E1204 23:20:11.811627    1643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod" podUID="7d7d08b6-0c55-4e1e-af14-bcf120b4fe87"
	Dec 04 23:20:12 addons-630093 kubelet[1643]: I1204 23:20:12.811849    1643 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cec44a14-191c-4123-b802-68a2c04f883d" path="/var/lib/kubelet/pods/cec44a14-191c-4123-b802-68a2c04f883d/volumes"
	Dec 04 23:20:13 addons-630093 kubelet[1643]: E1204 23:20:13.019356    1643 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354413019083095,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:20:13 addons-630093 kubelet[1643]: E1204 23:20:13.019398    1643 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354413019083095,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:20:16 addons-630093 kubelet[1643]: I1204 23:20:16.004164    1643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqg2s\" (UniqueName: \"kubernetes.io/projected/bd2ee58a-86d6-4981-ab81-15c06c700604-kube-api-access-hqg2s\") pod \"bd2ee58a-86d6-4981-ab81-15c06c700604\" (UID: \"bd2ee58a-86d6-4981-ab81-15c06c700604\") "
	Dec 04 23:20:16 addons-630093 kubelet[1643]: I1204 23:20:16.006350    1643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd2ee58a-86d6-4981-ab81-15c06c700604-kube-api-access-hqg2s" (OuterVolumeSpecName: "kube-api-access-hqg2s") pod "bd2ee58a-86d6-4981-ab81-15c06c700604" (UID: "bd2ee58a-86d6-4981-ab81-15c06c700604"). InnerVolumeSpecName "kube-api-access-hqg2s". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 04 23:20:16 addons-630093 kubelet[1643]: I1204 23:20:16.104988    1643 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-hqg2s\" (UniqueName: \"kubernetes.io/projected/bd2ee58a-86d6-4981-ab81-15c06c700604-kube-api-access-hqg2s\") on node \"addons-630093\" DevicePath \"\""
	Dec 04 23:20:16 addons-630093 kubelet[1643]: I1204 23:20:16.829838    1643 scope.go:117] "RemoveContainer" containerID="3c19424241254aa7b251454c93032375546f5aa0bb78359900dd507679edd6cf"
	Dec 04 23:20:16 addons-630093 kubelet[1643]: I1204 23:20:16.845457    1643 scope.go:117] "RemoveContainer" containerID="3c19424241254aa7b251454c93032375546f5aa0bb78359900dd507679edd6cf"
	Dec 04 23:20:16 addons-630093 kubelet[1643]: E1204 23:20:16.846020    1643 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c19424241254aa7b251454c93032375546f5aa0bb78359900dd507679edd6cf\": container with ID starting with 3c19424241254aa7b251454c93032375546f5aa0bb78359900dd507679edd6cf not found: ID does not exist" containerID="3c19424241254aa7b251454c93032375546f5aa0bb78359900dd507679edd6cf"
	Dec 04 23:20:16 addons-630093 kubelet[1643]: I1204 23:20:16.846078    1643 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c19424241254aa7b251454c93032375546f5aa0bb78359900dd507679edd6cf"} err="failed to get container status \"3c19424241254aa7b251454c93032375546f5aa0bb78359900dd507679edd6cf\": rpc error: code = NotFound desc = could not find container \"3c19424241254aa7b251454c93032375546f5aa0bb78359900dd507679edd6cf\": container with ID starting with 3c19424241254aa7b251454c93032375546f5aa0bb78359900dd507679edd6cf not found: ID does not exist"
	Dec 04 23:20:18 addons-630093 kubelet[1643]: I1204 23:20:18.318537    1643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23620bc7-9fcd-468c-a015-3fe5cc10c3b0-config-volume\") pod \"23620bc7-9fcd-468c-a015-3fe5cc10c3b0\" (UID: \"23620bc7-9fcd-468c-a015-3fe5cc10c3b0\") "
	Dec 04 23:20:18 addons-630093 kubelet[1643]: I1204 23:20:18.318609    1643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68bw4\" (UniqueName: \"kubernetes.io/projected/23620bc7-9fcd-468c-a015-3fe5cc10c3b0-kube-api-access-68bw4\") pod \"23620bc7-9fcd-468c-a015-3fe5cc10c3b0\" (UID: \"23620bc7-9fcd-468c-a015-3fe5cc10c3b0\") "
	Dec 04 23:20:18 addons-630093 kubelet[1643]: I1204 23:20:18.319142    1643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23620bc7-9fcd-468c-a015-3fe5cc10c3b0-config-volume" (OuterVolumeSpecName: "config-volume") pod "23620bc7-9fcd-468c-a015-3fe5cc10c3b0" (UID: "23620bc7-9fcd-468c-a015-3fe5cc10c3b0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Dec 04 23:20:18 addons-630093 kubelet[1643]: I1204 23:20:18.320641    1643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23620bc7-9fcd-468c-a015-3fe5cc10c3b0-kube-api-access-68bw4" (OuterVolumeSpecName: "kube-api-access-68bw4") pod "23620bc7-9fcd-468c-a015-3fe5cc10c3b0" (UID: "23620bc7-9fcd-468c-a015-3fe5cc10c3b0"). InnerVolumeSpecName "kube-api-access-68bw4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 04 23:20:18 addons-630093 kubelet[1643]: I1204 23:20:18.419193    1643 reconciler_common.go:288] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/23620bc7-9fcd-468c-a015-3fe5cc10c3b0-config-volume\") on node \"addons-630093\" DevicePath \"\""
	Dec 04 23:20:18 addons-630093 kubelet[1643]: I1204 23:20:18.419229    1643 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-68bw4\" (UniqueName: \"kubernetes.io/projected/23620bc7-9fcd-468c-a015-3fe5cc10c3b0-kube-api-access-68bw4\") on node \"addons-630093\" DevicePath \"\""
	Dec 04 23:20:18 addons-630093 kubelet[1643]: I1204 23:20:18.811716    1643 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd2ee58a-86d6-4981-ab81-15c06c700604" path="/var/lib/kubelet/pods/bd2ee58a-86d6-4981-ab81-15c06c700604/volumes"
	Dec 04 23:20:18 addons-630093 kubelet[1643]: I1204 23:20:18.836880    1643 scope.go:117] "RemoveContainer" containerID="c0b9ea5a54fce6f7ab008e85bf645783dfab5ad639c39d9edd23edb4365258d7"
	Dec 04 23:20:18 addons-630093 kubelet[1643]: I1204 23:20:18.852009    1643 scope.go:117] "RemoveContainer" containerID="c0b9ea5a54fce6f7ab008e85bf645783dfab5ad639c39d9edd23edb4365258d7"
	Dec 04 23:20:18 addons-630093 kubelet[1643]: E1204 23:20:18.852387    1643 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0b9ea5a54fce6f7ab008e85bf645783dfab5ad639c39d9edd23edb4365258d7\": container with ID starting with c0b9ea5a54fce6f7ab008e85bf645783dfab5ad639c39d9edd23edb4365258d7 not found: ID does not exist" containerID="c0b9ea5a54fce6f7ab008e85bf645783dfab5ad639c39d9edd23edb4365258d7"
	Dec 04 23:20:18 addons-630093 kubelet[1643]: I1204 23:20:18.852431    1643 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0b9ea5a54fce6f7ab008e85bf645783dfab5ad639c39d9edd23edb4365258d7"} err="failed to get container status \"c0b9ea5a54fce6f7ab008e85bf645783dfab5ad639c39d9edd23edb4365258d7\": rpc error: code = NotFound desc = could not find container \"c0b9ea5a54fce6f7ab008e85bf645783dfab5ad639c39d9edd23edb4365258d7\": container with ID starting with c0b9ea5a54fce6f7ab008e85bf645783dfab5ad639c39d9edd23edb4365258d7 not found: ID does not exist"
	Dec 04 23:20:20 addons-630093 kubelet[1643]: I1204 23:20:20.811960    1643 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23620bc7-9fcd-468c-a015-3fe5cc10c3b0" path="/var/lib/kubelet/pods/23620bc7-9fcd-468c-a015-3fe5cc10c3b0/volumes"
	Dec 04 23:20:23 addons-630093 kubelet[1643]: E1204 23:20:23.021629    1643 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354423021373848,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:20:23 addons-630093 kubelet[1643]: E1204 23:20:23.021662    1643 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354423021373848,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
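
The recurring eviction-manager errors above are self-describing: the runtime's ImageFsInfoResponse carries image-filesystem usage for /var/lib/containers/storage/overlay-images but an empty ContainerFilesystems list, which this kubelet treats as missing image stats. The same reply can be inspected directly on the node; a sketch, assuming the minikube profile from this run:

	minikube -p addons-630093 ssh
	sudo crictl imagefsinfo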
	
	
	==> storage-provisioner [7579ef87384414e56ddfe0b7d9482bd87f3030a02185f51552230baf2942b017] <==
	I1204 23:11:58.350091       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1204 23:11:58.357669       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1204 23:11:58.357713       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1204 23:11:58.365574       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1204 23:11:58.365696       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7e65eeda-0a1f-4ed0-93d5-7510680ef7a9", APIVersion:"v1", ResourceVersion:"914", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-630093_4fbeb0c1-dfd3-440b-90ad-a51f627c5476 became leader
	I1204 23:11:58.365747       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-630093_4fbeb0c1-dfd3-440b-90ad-a51f627c5476!
	I1204 23:11:58.466731       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-630093_4fbeb0c1-dfd3-440b-90ad-a51f627c5476!
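
The lease acquired at 23:11:58 is the kube-system/k8s.io-minikube-hostpath Endpoints object named in the event; client-go's Endpoints lock records the holder in the control-plane.alpha.kubernetes.io/leader annotation, so the election state can be read back directly:

	kubectl --context addons-630093 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml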
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-630093 -n addons-630093
helpers_test.go:261: (dbg) Run:  kubectl --context addons-630093 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-g9mgr ingress-nginx-admission-patch-6klmq
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/CSI]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-630093 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-g9mgr ingress-nginx-admission-patch-6klmq
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-630093 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-g9mgr ingress-nginx-admission-patch-6klmq: exit status 1 (82.961736ms)

                                                
                                                
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-630093/192.168.49.2
	Start Time:       Wed, 04 Dec 2024 23:14:26 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-49bg2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-49bg2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m59s                default-scheduler  Successfully assigned default/nginx to addons-630093
	  Warning  Failed     117s (x3 over 5m)    kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     117s (x3 over 5m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    77s (x5 over 5m)     kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     77s (x5 over 5m)     kubelet            Error: ImagePullBackOff
	  Normal   Pulling    66s (x4 over 5m59s)  kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-630093/192.168.49.2
	Start Time:       Wed, 04 Dec 2024 23:14:23 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bbll2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-bbll2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  6m2s                 default-scheduler  Successfully assigned default/task-pv-pod to addons-630093
	  Warning  Failed     5m31s                kubelet            Failed to pull image "docker.io/nginx": initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    105s (x4 over 6m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     55s (x4 over 5m31s)  kubelet            Error: ErrImagePull
	  Warning  Failed     55s (x3 over 3m59s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    14s (x7 over 5m30s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     14s (x7 over 5m30s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jd9np (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-jd9np:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-g9mgr" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-6klmq" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-630093 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-g9mgr ingress-nginx-admission-patch-6klmq: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-630093 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-630093 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-630093 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.644409676s)
--- FAIL: TestAddons/parallel/CSI (379.17s)
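
Every Failed event in the describes above bottoms out in Docker Hub's toomanyrequests pull limit rather than in the addons under test. Two workarounds, sketched with the profile name from this run and placeholder credentials: side-load the images so the kubelet never pulls, or attach an authenticated pull secret to the default service account:

	minikube -p addons-630093 image load docker.io/nginx:alpine
	kubectl --context addons-630093 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<token>
	kubectl --context addons-630093 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"regcred"}]}'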

                                                
                                    
x
+
TestAddons/parallel/LocalPath (334.48s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-630093 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-630093 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc test-pvc -o jsonpath={.status.phase} -n default
[the identical poll line repeats 283 more times before this excerpt ends, while the test waits out its 5m0s budget for pvc "test-pvc"]
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-630093 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Non-zero exit: kubectl --context addons-630093 get pvc test-pvc -o jsonpath={.status.phase} -n default: context deadline exceeded (2.25µs)
helpers_test.go:396: TestAddons/parallel/LocalPath: WARNING: PVC get for "default" "test-pvc" returned: context deadline exceeded
addons_test.go:899: failed waiting for PVC test-pvc: context deadline exceeded
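Note: the wait helper above polls kubectl in a loop under a single shared context; once that context's deadline expires, every further attempt fails immediately, which is why the final non-zero exit took only 2.25µs. A minimal Go sketch of the poll-until-deadline pattern, assuming a hypothetical getPhase helper and an illustrative 5-minute budget (neither is minikube's actual code):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// getPhase shells out to kubectl the same way the test helper does
	// (hypothetical helper; the name is illustrative, not minikube's own).
	func getPhase(ctx context.Context) (string, error) {
		out, err := exec.CommandContext(ctx, "kubectl", "--context", "addons-630093",
			"get", "pvc", "test-pvc", "-o", "jsonpath={.status.phase}", "-n", "default").Output()
		return string(out), err
	}

	func main() {
		// One deadline covers the whole wait; after it expires, every retry
		// fails instantly with "context deadline exceeded".
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
		defer cancel()
		for {
			if phase, err := getPhase(ctx); err == nil && phase == "Bound" {
				fmt.Println("PVC bound")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("failed waiting for PVC test-pvc:", ctx.Err())
				return
			case <-time.After(2 * time.Second):
				// back off briefly, then poll again
			}
		}
	}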
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/LocalPath]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-630093
helpers_test.go:235: (dbg) docker inspect addons-630093:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "172acc3450ade00044526824741e005120317f6d35ec317f851d2b6dc6d2a3b8",
	        "Created": "2024-12-04T23:11:16.797897353Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 389943,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-04T23:11:16.916347418Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a0bf2062289d31d12b734a031220306d830691a529a6eae8b4c8f4049e20571",
	        "ResolvConfPath": "/var/lib/docker/containers/172acc3450ade00044526824741e005120317f6d35ec317f851d2b6dc6d2a3b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/172acc3450ade00044526824741e005120317f6d35ec317f851d2b6dc6d2a3b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/172acc3450ade00044526824741e005120317f6d35ec317f851d2b6dc6d2a3b8/hosts",
	        "LogPath": "/var/lib/docker/containers/172acc3450ade00044526824741e005120317f6d35ec317f851d2b6dc6d2a3b8/172acc3450ade00044526824741e005120317f6d35ec317f851d2b6dc6d2a3b8-json.log",
	        "Name": "/addons-630093",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-630093:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-630093",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/469ba36a797e51b3c3ffcf32044a5cc7b1eaaf002213862a02e3a76a9b1fcfe2-init/diff:/var/lib/docker/overlay2/e1057f3484b1ab78c06169089ecae0d5a5ffb4d6954d3cd93f0938b7adf18020/diff",
	                "MergedDir": "/var/lib/docker/overlay2/469ba36a797e51b3c3ffcf32044a5cc7b1eaaf002213862a02e3a76a9b1fcfe2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/469ba36a797e51b3c3ffcf32044a5cc7b1eaaf002213862a02e3a76a9b1fcfe2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/469ba36a797e51b3c3ffcf32044a5cc7b1eaaf002213862a02e3a76a9b1fcfe2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-630093",
	                "Source": "/var/lib/docker/volumes/addons-630093/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-630093",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-630093",
	                "name.minikube.sigs.k8s.io": "addons-630093",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "38d3a3f6bb8d75ec22d0acfa9ec923dac8873b55e0bf68a977ec8a7eab9fc43d",
	            "SandboxKey": "/var/run/docker/netns/38d3a3f6bb8d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-630093": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a921fd89d48682e01ff03a455275f7258f4c5b5f271375ec1d96882eeae0da5a",
	                    "EndpointID": "1045d162f6b6ab28f4f633530bdbe7b45cc7c49fe1d735b103b4e8f31f8aba3e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-630093",
	                        "172acc3450ad"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
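Note: instead of dumping the whole inspect document as above, single fields can be pulled with docker inspect's --format template; a small Go sketch using the same template the provisioning log applies later to find the mapped SSH port (container name from this report, field choice illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same data as the JSON dump, one field at a time. The second template
		// expression mirrors the one the log itself uses to look up the host
		// port bound to the container's 22/tcp.
		format := `{{.State.Status}} {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", format, "addons-630093").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println(strings.TrimSpace(string(out))) // e.g. "running 33140"
	}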
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-630093 -n addons-630093
helpers_test.go:244: <<< TestAddons/parallel/LocalPath FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/LocalPath]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-630093 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-630093 logs -n 25: (1.197945247s)
helpers_test.go:252: TestAddons/parallel/LocalPath logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-287298   | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC |                     |
	|         | -p download-only-287298              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| delete  | -p download-only-287298              | download-only-287298   | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| start   | -o=json --download-only              | download-only-701357   | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC |                     |
	|         | -p download-only-701357              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| delete  | -p download-only-701357              | download-only-701357   | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| delete  | -p download-only-287298              | download-only-287298   | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| delete  | -p download-only-701357              | download-only-701357   | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| start   | --download-only -p                   | download-docker-758817 | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC |                     |
	|         | download-docker-758817               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-758817            | download-docker-758817 | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| start   | --download-only -p                   | binary-mirror-223027   | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC |                     |
	|         | binary-mirror-223027                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:45271               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-223027              | binary-mirror-223027   | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| addons  | disable dashboard -p                 | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC |                     |
	|         | addons-630093                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC |                     |
	|         | addons-630093                        |                        |         |         |                     |                     |
	| start   | -p addons-630093 --wait=true         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:13 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	| addons  | addons-630093 addons disable         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:13 UTC | 04 Dec 24 23:13 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-630093 addons disable         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:13 UTC | 04 Dec 24 23:14 UTC |
	|         | gcp-auth --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | -p addons-630093                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-630093 addons                 | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | disable nvidia-device-plugin         |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-630093 addons disable         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | amd-gpu-device-plugin                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-630093 addons disable         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| ip      | addons-630093 ip                     | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	| addons  | addons-630093 addons disable         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-630093 addons                 | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | disable inspektor-gadget             |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-630093 addons disable         | addons-630093          | jenkins | v1.34.0 | 04 Dec 24 23:14 UTC | 04 Dec 24 23:14 UTC |
	|         | yakd --alsologtostderr -v=1          |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 23:10:54
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 23:10:54.556147  389201 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:10:54.556275  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:10:54.556285  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:10:54.556289  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:10:54.556510  389201 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-381016/.minikube/bin
	I1204 23:10:54.557204  389201 out.go:352] Setting JSON to false
	I1204 23:10:54.558202  389201 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6804,"bootTime":1733347051,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 23:10:54.558281  389201 start.go:139] virtualization: kvm guest
	I1204 23:10:54.560449  389201 out.go:177] * [addons-630093] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 23:10:54.561800  389201 notify.go:220] Checking for updates...
	I1204 23:10:54.561821  389201 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 23:10:54.563229  389201 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 23:10:54.564678  389201 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-381016/kubeconfig
	I1204 23:10:54.566233  389201 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-381016/.minikube
	I1204 23:10:54.567553  389201 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 23:10:54.568781  389201 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 23:10:54.570554  389201 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 23:10:54.592245  389201 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1204 23:10:54.592340  389201 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:10:54.635748  389201 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-04 23:10:54.62674737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1204 23:10:54.635854  389201 docker.go:318] overlay module found
	I1204 23:10:54.637780  389201 out.go:177] * Using the docker driver based on user configuration
	I1204 23:10:54.639298  389201 start.go:297] selected driver: docker
	I1204 23:10:54.639319  389201 start.go:901] validating driver "docker" against <nil>
	I1204 23:10:54.639333  389201 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 23:10:54.640090  389201 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:10:54.684497  389201 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-04 23:10:54.676209306 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1204 23:10:54.684673  389201 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 23:10:54.684915  389201 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 23:10:54.686872  389201 out.go:177] * Using Docker driver with root privileges
	I1204 23:10:54.688173  389201 cni.go:84] Creating CNI manager for ""
	I1204 23:10:54.688255  389201 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1204 23:10:54.688267  389201 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1204 23:10:54.688343  389201 start.go:340] cluster config:
	{Name:addons-630093 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-630093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:10:54.689848  389201 out.go:177] * Starting "addons-630093" primary control-plane node in "addons-630093" cluster
	I1204 23:10:54.691334  389201 cache.go:121] Beginning downloading kic base image for docker with crio
	I1204 23:10:54.692886  389201 out.go:177] * Pulling base image v0.0.45-1730888964-19917 ...
	I1204 23:10:54.694391  389201 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:10:54.694445  389201 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20045-381016/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 23:10:54.694446  389201 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1204 23:10:54.694486  389201 cache.go:56] Caching tarball of preloaded images
	I1204 23:10:54.694592  389201 preload.go:172] Found /home/jenkins/minikube-integration/20045-381016/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1204 23:10:54.694609  389201 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 23:10:54.695076  389201 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/config.json ...
	I1204 23:10:54.695108  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/config.json: {Name:mk972e12a39ea9a33ae63a1f9239f64d658df51e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:10:54.710108  389201 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1204 23:10:54.710258  389201 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1204 23:10:54.710280  389201 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory, skipping pull
	I1204 23:10:54.710287  389201 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in cache, skipping pull
	I1204 23:10:54.710299  389201 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	I1204 23:10:54.710311  389201 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from local cache
	I1204 23:11:08.081763  389201 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 from cached tarball
	I1204 23:11:08.081807  389201 cache.go:194] Successfully downloaded all kic artifacts
	I1204 23:11:08.081860  389201 start.go:360] acquireMachinesLock for addons-630093: {Name:mk65aca0e5e36a044494f94ee0e0497ac2b0ebab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1204 23:11:08.081970  389201 start.go:364] duration metric: took 86.786µs to acquireMachinesLock for "addons-630093"
	I1204 23:11:08.081996  389201 start.go:93] Provisioning new machine with config: &{Name:addons-630093 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-630093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:11:08.082085  389201 start.go:125] createHost starting for "" (driver="docker")
	I1204 23:11:08.248667  389201 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1204 23:11:08.249041  389201 start.go:159] libmachine.API.Create for "addons-630093" (driver="docker")
	I1204 23:11:08.249091  389201 client.go:168] LocalClient.Create starting
	I1204 23:11:08.249258  389201 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca.pem
	I1204 23:11:08.313688  389201 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/cert.pem
	I1204 23:11:08.644970  389201 cli_runner.go:164] Run: docker network inspect addons-630093 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1204 23:11:08.660700  389201 cli_runner.go:211] docker network inspect addons-630093 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1204 23:11:08.660788  389201 network_create.go:284] running [docker network inspect addons-630093] to gather additional debugging logs...
	I1204 23:11:08.660826  389201 cli_runner.go:164] Run: docker network inspect addons-630093
	W1204 23:11:08.677347  389201 cli_runner.go:211] docker network inspect addons-630093 returned with exit code 1
	I1204 23:11:08.677402  389201 network_create.go:287] error running [docker network inspect addons-630093]: docker network inspect addons-630093: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-630093 not found
	I1204 23:11:08.677421  389201 network_create.go:289] output of [docker network inspect addons-630093]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-630093 not found
	
	** /stderr **
	I1204 23:11:08.677519  389201 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1204 23:11:08.695034  389201 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016ec7e0}
	I1204 23:11:08.695093  389201 network_create.go:124] attempt to create docker network addons-630093 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1204 23:11:08.695152  389201 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-630093 addons-630093
	I1204 23:11:08.969618  389201 network_create.go:108] docker network addons-630093 192.168.49.0/24 created
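Note: the sequence above is inspect-then-create: docker network inspect exits non-zero with "network addons-630093 not found", so minikube creates the network on the first free private /24. A minimal Go sketch of that pattern, reusing the subnet, gateway, and MTU option from the log (not minikube's actual network_create code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		name := "addons-630093"
		// Probe first: `docker network inspect` exits non-zero when the network is absent.
		if err := exec.Command("docker", "network", "inspect", name).Run(); err != nil {
			// Not found, so create it with the same options the log shows.
			out, err := exec.Command("docker", "network", "create",
				"--driver=bridge", "--subnet=192.168.49.0/24", "--gateway=192.168.49.1",
				"-o", "com.docker.network.driver.mtu=1500", name).CombinedOutput()
			if err != nil {
				fmt.Println("network create failed:", err, string(out))
				return
			}
		}
		fmt.Println("network", name, "ready")
	}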
	I1204 23:11:08.969673  389201 kic.go:121] calculated static IP "192.168.49.2" for the "addons-630093" container
	I1204 23:11:08.969756  389201 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1204 23:11:08.986135  389201 cli_runner.go:164] Run: docker volume create addons-630093 --label name.minikube.sigs.k8s.io=addons-630093 --label created_by.minikube.sigs.k8s.io=true
	I1204 23:11:09.028135  389201 oci.go:103] Successfully created a docker volume addons-630093
	I1204 23:11:09.028233  389201 cli_runner.go:164] Run: docker run --rm --name addons-630093-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-630093 --entrypoint /usr/bin/test -v addons-630093:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib
	I1204 23:11:12.239841  389201 cli_runner.go:217] Completed: docker run --rm --name addons-630093-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-630093 --entrypoint /usr/bin/test -v addons-630093:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -d /var/lib: (3.211561235s)
	I1204 23:11:12.239873  389201 oci.go:107] Successfully prepared a docker volume addons-630093
	I1204 23:11:12.239893  389201 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:11:12.239931  389201 kic.go:194] Starting extracting preloaded images to volume ...
	I1204 23:11:12.240003  389201 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20045-381016/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-630093:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir
	I1204 23:11:16.734062  389201 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20045-381016/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-630093:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 -I lz4 -xf /preloaded.tar -C /extractDir: (4.493971774s)
	I1204 23:11:16.734103  389201 kic.go:203] duration metric: took 4.49416848s to extract preloaded images to volume ...
	W1204 23:11:16.734242  389201 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1204 23:11:16.734340  389201 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1204 23:11:16.781802  389201 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-630093 --name addons-630093 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-630093 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-630093 --network addons-630093 --ip 192.168.49.2 --volume addons-630093:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615
	I1204 23:11:17.088338  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Running}}
	I1204 23:11:17.106885  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:17.125610  389201 cli_runner.go:164] Run: docker exec addons-630093 stat /var/lib/dpkg/alternatives/iptables
	I1204 23:11:17.168914  389201 oci.go:144] the created container "addons-630093" has a running status.
	I1204 23:11:17.168961  389201 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa...
	I1204 23:11:17.214837  389201 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1204 23:11:17.235866  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:17.253714  389201 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1204 23:11:17.253744  389201 kic_runner.go:114] Args: [docker exec --privileged addons-630093 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1204 23:11:17.295280  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:17.314090  389201 machine.go:93] provisionDockerMachine start ...
	I1204 23:11:17.314213  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:17.333326  389201 main.go:141] libmachine: Using SSH client type: native
	I1204 23:11:17.333585  389201 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1204 23:11:17.333604  389201 main.go:141] libmachine: About to run SSH command:
	hostname
	I1204 23:11:17.334344  389201 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53382->127.0.0.1:33140: read: connection reset by peer
	I1204 23:11:20.462359  389201 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-630093
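Note: the "connection reset by peer" dial error above is expected on the first attempt, since sshd inside the just-started container is not listening yet; the client retries until the hostname command succeeds. A rough Go sketch of that dial-and-retry loop using golang.org/x/crypto/ssh, with port 33140 taken from the port map earlier in this report and a placeholder auth method (minikube really authenticates with the generated id_rsa key):

	package main

	import (
		"fmt"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.Password("placeholder")}, // assumption: real code loads the generated private key
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
			Timeout:         5 * time.Second,
		}
		for attempt := 1; attempt <= 10; attempt++ {
			client, err := ssh.Dial("tcp", "127.0.0.1:33140", cfg)
			if err != nil {
				// Early dials fail (e.g. connection reset) while sshd comes up.
				time.Sleep(time.Second)
				continue
			}
			session, err := client.NewSession()
			if err != nil {
				client.Close()
				continue
			}
			out, _ := session.Output("hostname")
			session.Close()
			client.Close()
			fmt.Printf("ssh ready after %d attempt(s): %s", attempt, out)
			return
		}
		fmt.Println("gave up waiting for sshd")
	}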
	
	I1204 23:11:20.462394  389201 ubuntu.go:169] provisioning hostname "addons-630093"
	I1204 23:11:20.462459  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:20.480144  389201 main.go:141] libmachine: Using SSH client type: native
	I1204 23:11:20.480382  389201 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1204 23:11:20.480401  389201 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-630093 && echo "addons-630093" | sudo tee /etc/hostname
	I1204 23:11:20.617685  389201 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-630093
	
	I1204 23:11:20.617755  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:20.634927  389201 main.go:141] libmachine: Using SSH client type: native
	I1204 23:11:20.635110  389201 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1204 23:11:20.635127  389201 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-630093' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-630093/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-630093' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1204 23:11:20.762943  389201 main.go:141] libmachine: SSH cmd err, output: <nil>: 
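	(Note: the multi-line SSH command above is an idempotent edit: it touches /etc/hosts only when no line already maps to addons-630093, either rewriting an existing 127.0.1.1 entry or appending one. A standalone Go sketch of the same logic, assuming a plain hosts file; this is a hypothetical helper, not the provisioner's code.)

    package main

    import (
        "os"
        "strings"
    )

    // ensureHostsEntry mirrors the shell above: leave the file alone when the
    // hostname is already mapped, otherwise rewrite the 127.0.1.1 line or
    // append a new entry.
    func ensureHostsEntry(path, hostname string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(string(data), "\n")
        for _, l := range lines {
            f := strings.Fields(l)
            if len(f) >= 2 && f[len(f)-1] == hostname {
                return nil // already mapped, nothing to do
            }
        }
        entry := "127.0.1.1 " + hostname
        for i, l := range lines {
            if strings.HasPrefix(l, "127.0.1.1") {
                lines[i] = entry
                return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
            }
        }
        lines = append(lines, entry)
        return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0o644)
    }

    func main() {
        _ = ensureHostsEntry("/etc/hosts", "addons-630093")
    }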
	I1204 23:11:20.762974  389201 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20045-381016/.minikube CaCertPath:/home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20045-381016/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20045-381016/.minikube}
	I1204 23:11:20.763024  389201 ubuntu.go:177] setting up certificates
	I1204 23:11:20.763037  389201 provision.go:84] configureAuth start
	I1204 23:11:20.763097  389201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-630093
	I1204 23:11:20.780798  389201 provision.go:143] copyHostCerts
	I1204 23:11:20.780875  389201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20045-381016/.minikube/cert.pem (1123 bytes)
	I1204 23:11:20.780993  389201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20045-381016/.minikube/key.pem (1679 bytes)
	I1204 23:11:20.781063  389201 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20045-381016/.minikube/ca.pem (1082 bytes)
	I1204 23:11:20.781117  389201 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20045-381016/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca-key.pem org=jenkins.addons-630093 san=[127.0.0.1 192.168.49.2 addons-630093 localhost minikube]
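	(Note: the server certificate generated here carries exactly the SANs listed in the log line, so the machine is reachable by IP or by any of its names. Below is a self-contained Go standard-library sketch of issuing a CA-signed certificate with those SANs; the keys, subjects, and lifetimes are placeholders, and real code would load ca.pem/ca-key.pem from disk instead of generating a throwaway CA.)

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway stand-in for ca.pem / ca-key.pem.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(365 * 24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        caCert, err := x509.ParseCertificate(caDER)
        if err != nil {
            panic(err)
        }

        // Server certificate with the SANs from the log line above.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-630093"}},
            DNSNames:     []string{"addons-630093", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }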
	I1204 23:11:20.868299  389201 provision.go:177] copyRemoteCerts
	I1204 23:11:20.868362  389201 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1204 23:11:20.868401  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:20.885888  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:20.979351  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1204 23:11:21.002115  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1204 23:11:21.025135  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1204 23:11:21.048097  389201 provision.go:87] duration metric: took 285.042631ms to configureAuth
	I1204 23:11:21.048133  389201 ubuntu.go:193] setting minikube options for container-runtime
	I1204 23:11:21.048329  389201 config.go:182] Loaded profile config "addons-630093": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:11:21.048491  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:21.065589  389201 main.go:141] libmachine: Using SSH client type: native
	I1204 23:11:21.065803  389201 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866c40] 0x869920 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1204 23:11:21.065829  389201 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1204 23:11:21.286767  389201 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1204 23:11:21.286801  389201 machine.go:96] duration metric: took 3.972682372s to provisionDockerMachine
	I1204 23:11:21.286818  389201 client.go:171] duration metric: took 13.037716692s to LocalClient.Create
	I1204 23:11:21.286846  389201 start.go:167] duration metric: took 13.037808895s to libmachine.API.Create "addons-630093"
	I1204 23:11:21.286858  389201 start.go:293] postStartSetup for "addons-630093" (driver="docker")
	I1204 23:11:21.286873  389201 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1204 23:11:21.286987  389201 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1204 23:11:21.287090  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:21.304282  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:21.395931  389201 ssh_runner.go:195] Run: cat /etc/os-release
	I1204 23:11:21.399160  389201 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1204 23:11:21.399199  389201 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1204 23:11:21.399213  389201 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1204 23:11:21.399225  389201 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1204 23:11:21.399238  389201 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-381016/.minikube/addons for local assets ...
	I1204 23:11:21.399311  389201 filesync.go:126] Scanning /home/jenkins/minikube-integration/20045-381016/.minikube/files for local assets ...
	I1204 23:11:21.399355  389201 start.go:296] duration metric: took 112.489476ms for postStartSetup
	I1204 23:11:21.399706  389201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-630093
	I1204 23:11:21.416048  389201 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/config.json ...
	I1204 23:11:21.416313  389201 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1204 23:11:21.416373  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:21.433021  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:21.523629  389201 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1204 23:11:21.527955  389201 start.go:128] duration metric: took 13.445851769s to createHost
	I1204 23:11:21.527994  389201 start.go:83] releasing machines lock for "addons-630093", held for 13.446010021s
	I1204 23:11:21.528078  389201 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-630093
	I1204 23:11:21.544604  389201 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1204 23:11:21.544635  389201 ssh_runner.go:195] Run: cat /version.json
	I1204 23:11:21.544698  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:21.544711  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:21.562063  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:21.563107  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:21.726911  389201 ssh_runner.go:195] Run: systemctl --version
	I1204 23:11:21.731218  389201 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1204 23:11:21.869255  389201 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1204 23:11:21.873644  389201 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 23:11:21.892231  389201 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1204 23:11:21.892324  389201 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1204 23:11:21.918534  389201 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
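	(Note: the find/mv passes above park every loopback, bridge, and podman CNI config under a .mk_disabled suffix so that the kindnet CNI installed later owns pod networking. The same renaming expressed as a hypothetical Go sketch, for illustration only.)

    package main

    import (
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        patterns := []string{
            "/etc/cni/net.d/*loopback.conf*",
            "/etc/cni/net.d/*bridge*",
            "/etc/cni/net.d/*podman*",
        }
        for _, pat := range patterns {
            matches, _ := filepath.Glob(pat)
            for _, f := range matches {
                if strings.HasSuffix(f, ".mk_disabled") {
                    continue // already disabled
                }
                _ = os.Rename(f, f+".mk_disabled")
            }
        }
    }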
	I1204 23:11:21.918567  389201 start.go:495] detecting cgroup driver to use...
	I1204 23:11:21.918609  389201 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1204 23:11:21.918738  389201 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1204 23:11:21.932783  389201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1204 23:11:21.942996  389201 docker.go:217] disabling cri-docker service (if available) ...
	I1204 23:11:21.943047  389201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1204 23:11:21.955543  389201 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1204 23:11:21.968274  389201 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1204 23:11:22.038339  389201 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1204 23:11:22.105989  389201 docker.go:233] disabling docker service ...
	I1204 23:11:22.106057  389201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1204 23:11:22.125303  389201 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1204 23:11:22.136595  389201 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1204 23:11:22.222266  389201 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1204 23:11:22.302782  389201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1204 23:11:22.313850  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1204 23:11:22.329072  389201 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1204 23:11:22.329153  389201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:11:22.338774  389201 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1204 23:11:22.338845  389201 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:11:22.348617  389201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:11:22.358293  389201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:11:22.368200  389201 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1204 23:11:22.377304  389201 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:11:22.386913  389201 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1204 23:11:22.402803  389201 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
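	(Note: the sed commands above patch /etc/crio/crio.conf.d/02-crio.conf in place, setting the pause image, the cgroup manager, the conmon cgroup, and the unprivileged-port sysctl. For illustration, a Go equivalent of the first two whole-line replacements; a sketch of the same edits, not how minikube applies them.)

    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            log.Fatal(err)
        }
        // Replace any existing pause_image / cgroup_manager lines wholesale,
        // exactly like the sed 's|^.*key = .*$|...|' invocations above.
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(path, data, 0o644); err != nil {
            log.Fatal(err)
        }
    }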
	I1204 23:11:22.412320  389201 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1204 23:11:22.420685  389201 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1204 23:11:22.428658  389201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:11:22.500255  389201 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1204 23:11:22.610956  389201 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1204 23:11:22.611044  389201 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1204 23:11:22.614513  389201 start.go:563] Will wait 60s for crictl version
	I1204 23:11:22.614575  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:11:22.617917  389201 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1204 23:11:22.653283  389201 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1204 23:11:22.653370  389201 ssh_runner.go:195] Run: crio --version
	I1204 23:11:22.690618  389201 ssh_runner.go:195] Run: crio --version
	I1204 23:11:22.727703  389201 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1204 23:11:22.729320  389201 cli_runner.go:164] Run: docker network inspect addons-630093 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
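	(Note: the --format argument above is a Go text/template that the docker CLI evaluates against the network's inspect data. A tiny stand-alone demonstration of that templating mechanism, with hypothetical data.)

    package main

    import (
        "os"
        "text/template"
    )

    func main() {
        net := map[string]any{
            "Name":   "addons-630093",
            "Driver": "bridge",
        }
        // The same {{.Field}} syntax docker accepts for --format strings.
        tmpl := template.Must(template.New("net").Parse(
            `{"Name": "{{.Name}}", "Driver": "{{.Driver}}"}` + "\n"))
        _ = tmpl.Execute(os.Stdout, net)
    }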
	I1204 23:11:22.746518  389201 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1204 23:11:22.750432  389201 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:11:22.761195  389201 kubeadm.go:883] updating cluster {Name:addons-630093 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-630093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1204 23:11:22.761320  389201 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:11:22.761379  389201 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 23:11:22.829323  389201 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 23:11:22.829348  389201 crio.go:433] Images already preloaded, skipping extraction
	I1204 23:11:22.829393  389201 ssh_runner.go:195] Run: sudo crictl images --output json
	I1204 23:11:22.862169  389201 crio.go:514] all images are preloaded for cri-o runtime.
	I1204 23:11:22.862194  389201 cache_images.go:84] Images are preloaded, skipping loading
	I1204 23:11:22.862203  389201 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1204 23:11:22.862323  389201 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-630093 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-630093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1204 23:11:22.862387  389201 ssh_runner.go:195] Run: crio config
	I1204 23:11:22.906710  389201 cni.go:84] Creating CNI manager for ""
	I1204 23:11:22.906743  389201 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1204 23:11:22.906760  389201 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1204 23:11:22.906791  389201 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-630093 NodeName:addons-630093 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1204 23:11:22.906954  389201 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-630093"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
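	(Note: the rendered kubeadm config above is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. It is written to /var/tmp/minikube/kubeadm.yaml.new below and later handed to kubeadm init. A stdlib-only Go sketch that splits such a stream and reports each document's kind; the local file path is an assumption, e.g. a copy fetched with `minikube ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml`.)

    package main

    import (
        "fmt"
        "os"
        "regexp"
        "strings"
    )

    func main() {
        data, err := os.ReadFile("kubeadm.yaml") // hypothetical local copy
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        kindRe := regexp.MustCompile(`(?m)^kind: (\S+)`)
        for i, doc := range strings.Split(string(data), "\n---\n") {
            if m := kindRe.FindStringSubmatch(doc); m != nil {
                fmt.Printf("document %d: %s\n", i+1, m[1])
            }
        }
    }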
	
	I1204 23:11:22.907084  389201 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1204 23:11:22.916048  389201 binaries.go:44] Found k8s binaries, skipping transfer
	I1204 23:11:22.916128  389201 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1204 23:11:22.924791  389201 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1204 23:11:22.942166  389201 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1204 23:11:22.959356  389201 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I1204 23:11:22.976793  389201 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1204 23:11:22.980197  389201 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1204 23:11:22.990601  389201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:11:23.062561  389201 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:11:23.075015  389201 certs.go:68] Setting up /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093 for IP: 192.168.49.2
	I1204 23:11:23.075040  389201 certs.go:194] generating shared ca certs ...
	I1204 23:11:23.075059  389201 certs.go:226] acquiring lock for ca certs: {Name:mk50fab2a60ec4c58718c6f0f51391a1fd27b49a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.075181  389201 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20045-381016/.minikube/ca.key
	I1204 23:11:23.204545  389201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-381016/.minikube/ca.crt ...
	I1204 23:11:23.204578  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/ca.crt: {Name:mkc915739630db1af592b52d8db74ccdd723c7d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.204795  389201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-381016/.minikube/ca.key ...
	I1204 23:11:23.204810  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/ca.key: {Name:mk98e76db05ffadd20917a2d52b7c5260ba39b61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.204916  389201 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.key
	I1204 23:11:23.290846  389201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.crt ...
	I1204 23:11:23.290885  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.crt: {Name:mkde85a69cd8a6277fae67df41cc397c773bd1a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.291129  389201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.key ...
	I1204 23:11:23.291148  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.key: {Name:mk4d177cf9edd13c7ad0b568d9086767e339e8d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.291277  389201 certs.go:256] generating profile certs ...
	I1204 23:11:23.291366  389201 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.key
	I1204 23:11:23.291400  389201 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt with IP's: []
	I1204 23:11:23.499855  389201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt ...
	I1204 23:11:23.499895  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: {Name:mk9311f602c7b1a2b44c19176448b2aa4b32b7c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.500105  389201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.key ...
	I1204 23:11:23.500123  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.key: {Name:mk9ddfb2303f17ccf88a6e5b8c00cffba1cd1a53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.500223  389201 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.key.8394f548
	I1204 23:11:23.500249  389201 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.crt.8394f548 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1204 23:11:23.788463  389201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.crt.8394f548 ...
	I1204 23:11:23.788500  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.crt.8394f548: {Name:mk43ba65c92ad4331db8d9847c5ef32165302741 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.788694  389201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.key.8394f548 ...
	I1204 23:11:23.788714  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.key.8394f548: {Name:mkaced9e8196936ffe141d4dc3e6adda91a33533 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:23.788818  389201 certs.go:381] copying /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.crt.8394f548 -> /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.crt
	I1204 23:11:23.788916  389201 certs.go:385] copying /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.key.8394f548 -> /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.key
	I1204 23:11:23.788997  389201 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.key
	I1204 23:11:23.789023  389201 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.crt with IP's: []
	I1204 23:11:24.148068  389201 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.crt ...
	I1204 23:11:24.148104  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.crt: {Name:mk0ee13602067d1cc858c9534a9707d295b361ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:24.148309  389201 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.key ...
	I1204 23:11:24.148327  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.key: {Name:mk0ba88937bb7ca6e51a8cf0c8d2ef8507f0374f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:24.148532  389201 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca-key.pem (1675 bytes)
	I1204 23:11:24.148585  389201 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/ca.pem (1082 bytes)
	I1204 23:11:24.148628  389201 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/cert.pem (1123 bytes)
	I1204 23:11:24.148673  389201 certs.go:484] found cert: /home/jenkins/minikube-integration/20045-381016/.minikube/certs/key.pem (1679 bytes)
	I1204 23:11:24.149367  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1204 23:11:24.173224  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1204 23:11:24.196229  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1204 23:11:24.219088  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1204 23:11:24.242335  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1204 23:11:24.265632  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1204 23:11:24.288555  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1204 23:11:24.311820  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1204 23:11:24.334208  389201 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20045-381016/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1204 23:11:24.356395  389201 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1204 23:11:24.373538  389201 ssh_runner.go:195] Run: openssl version
	I1204 23:11:24.378816  389201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1204 23:11:24.388861  389201 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:11:24.392560  389201 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  4 23:11 /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:11:24.392635  389201 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1204 23:11:24.399222  389201 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1204 23:11:24.408373  389201 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1204 23:11:24.411765  389201 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1204 23:11:24.411828  389201 kubeadm.go:392] StartCluster: {Name:addons-630093 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-630093 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:11:24.411930  389201 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1204 23:11:24.412006  389201 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1204 23:11:24.445620  389201 cri.go:89] found id: ""
	I1204 23:11:24.445692  389201 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1204 23:11:24.454281  389201 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1204 23:11:24.462658  389201 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1204 23:11:24.462715  389201 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1204 23:11:24.471058  389201 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1204 23:11:24.471082  389201 kubeadm.go:157] found existing configuration files:
	
	I1204 23:11:24.471133  389201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1204 23:11:24.479379  389201 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1204 23:11:24.479446  389201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1204 23:11:24.488299  389201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1204 23:11:24.496565  389201 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1204 23:11:24.496635  389201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1204 23:11:24.505412  389201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1204 23:11:24.514190  389201 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1204 23:11:24.514243  389201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1204 23:11:24.522477  389201 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1204 23:11:24.531365  389201 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1204 23:11:24.531421  389201 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1204 23:11:24.539416  389201 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1204 23:11:24.592567  389201 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1071-gcp\n", err: exit status 1
	I1204 23:11:24.645179  389201 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1204 23:11:33.426336  389201 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1204 23:11:33.426437  389201 kubeadm.go:310] [preflight] Running pre-flight checks
	I1204 23:11:33.426522  389201 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1204 23:11:33.426572  389201 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1071-gcp
	I1204 23:11:33.426602  389201 kubeadm.go:310] OS: Linux
	I1204 23:11:33.426679  389201 kubeadm.go:310] CGROUPS_CPU: enabled
	I1204 23:11:33.426720  389201 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1204 23:11:33.426798  389201 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1204 23:11:33.426877  389201 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1204 23:11:33.426958  389201 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1204 23:11:33.427034  389201 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1204 23:11:33.427111  389201 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1204 23:11:33.427182  389201 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1204 23:11:33.427243  389201 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1204 23:11:33.427304  389201 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1204 23:11:33.427436  389201 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1204 23:11:33.427575  389201 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1204 23:11:33.427676  389201 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1204 23:11:33.429670  389201 out.go:235]   - Generating certificates and keys ...
	I1204 23:11:33.429776  389201 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1204 23:11:33.429879  389201 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1204 23:11:33.429944  389201 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1204 23:11:33.429996  389201 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1204 23:11:33.430058  389201 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1204 23:11:33.430106  389201 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1204 23:11:33.430157  389201 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1204 23:11:33.430253  389201 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-630093 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1204 23:11:33.430323  389201 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1204 23:11:33.430455  389201 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-630093 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1204 23:11:33.430550  389201 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1204 23:11:33.430624  389201 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1204 23:11:33.430694  389201 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1204 23:11:33.430742  389201 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1204 23:11:33.430787  389201 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1204 23:11:33.430873  389201 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1204 23:11:33.430954  389201 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1204 23:11:33.431013  389201 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1204 23:11:33.431063  389201 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1204 23:11:33.431131  389201 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1204 23:11:33.431189  389201 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1204 23:11:33.432586  389201 out.go:235]   - Booting up control plane ...
	I1204 23:11:33.432667  389201 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1204 23:11:33.432728  389201 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1204 23:11:33.432786  389201 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1204 23:11:33.432889  389201 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1204 23:11:33.433004  389201 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1204 23:11:33.433088  389201 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1204 23:11:33.433245  389201 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1204 23:11:33.433395  389201 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1204 23:11:33.433490  389201 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.66305ms
	I1204 23:11:33.433586  389201 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1204 23:11:33.433659  389201 kubeadm.go:310] [api-check] The API server is healthy after 4.001728957s
	I1204 23:11:33.433784  389201 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1204 23:11:33.433892  389201 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1204 23:11:33.433961  389201 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1204 23:11:33.434106  389201 kubeadm.go:310] [mark-control-plane] Marking the node addons-630093 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1204 23:11:33.434165  389201 kubeadm.go:310] [bootstrap-token] Using token: 6qxarj.88k5pjf3ytyfzen4
	I1204 23:11:33.435845  389201 out.go:235]   - Configuring RBAC rules ...
	I1204 23:11:33.435945  389201 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1204 23:11:33.436019  389201 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1204 23:11:33.436136  389201 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1204 23:11:33.436246  389201 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1204 23:11:33.436351  389201 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1204 23:11:33.436423  389201 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1204 23:11:33.436515  389201 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1204 23:11:33.436552  389201 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1204 23:11:33.436626  389201 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1204 23:11:33.436642  389201 kubeadm.go:310] 
	I1204 23:11:33.436722  389201 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1204 23:11:33.436737  389201 kubeadm.go:310] 
	I1204 23:11:33.436836  389201 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1204 23:11:33.436844  389201 kubeadm.go:310] 
	I1204 23:11:33.436864  389201 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1204 23:11:33.436913  389201 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1204 23:11:33.436961  389201 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1204 23:11:33.436967  389201 kubeadm.go:310] 
	I1204 23:11:33.437008  389201 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1204 23:11:33.437016  389201 kubeadm.go:310] 
	I1204 23:11:33.437056  389201 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1204 23:11:33.437062  389201 kubeadm.go:310] 
	I1204 23:11:33.437107  389201 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1204 23:11:33.437170  389201 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1204 23:11:33.437258  389201 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1204 23:11:33.437274  389201 kubeadm.go:310] 
	I1204 23:11:33.437411  389201 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1204 23:11:33.437541  389201 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1204 23:11:33.437553  389201 kubeadm.go:310] 
	I1204 23:11:33.437672  389201 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 6qxarj.88k5pjf3ytyfzen4 \
	I1204 23:11:33.437797  389201 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e2721502eca5fe8af4d77f137e4406b90f31d1565f7dd87db91cf7b9fa1e9057 \
	I1204 23:11:33.437833  389201 kubeadm.go:310] 	--control-plane 
	I1204 23:11:33.437842  389201 kubeadm.go:310] 
	I1204 23:11:33.437945  389201 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1204 23:11:33.437954  389201 kubeadm.go:310] 
	I1204 23:11:33.438055  389201 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 6qxarj.88k5pjf3ytyfzen4 \
	I1204 23:11:33.438195  389201 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e2721502eca5fe8af4d77f137e4406b90f31d1565f7dd87db91cf7b9fa1e9057 
	I1204 23:11:33.438211  389201 cni.go:84] Creating CNI manager for ""
	I1204 23:11:33.438221  389201 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1204 23:11:33.439987  389201 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1204 23:11:33.441251  389201 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1204 23:11:33.445237  389201 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1204 23:11:33.445258  389201 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1204 23:11:33.462279  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1204 23:11:33.665861  389201 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1204 23:11:33.665944  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:33.665972  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-630093 minikube.k8s.io/updated_at=2024_12_04T23_11_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d minikube.k8s.io/name=addons-630093 minikube.k8s.io/primary=true
	I1204 23:11:33.673805  389201 ops.go:34] apiserver oom_adj: -16
	I1204 23:11:33.756672  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:34.256804  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:34.757586  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:35.256809  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:35.757274  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:36.256932  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:36.757774  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:37.257415  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:37.756756  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:38.256823  389201 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1204 23:11:38.333806  389201 kubeadm.go:1113] duration metric: took 4.667934536s to wait for elevateKubeSystemPrivileges
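	(Note: the burst of identical "kubectl get sa default" invocations above is a readiness poll; addon workloads cannot run until the default ServiceAccount exists, so minikube retries on a roughly 500ms cadence and records the total wait. A freestanding version of that loop as a hypothetical Go sketch using os/exec; the plain kubectl invocation and the timeout are placeholders.)

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute) // placeholder timeout
        for time.Now().Before(deadline) {
            // Succeeds only once the default ServiceAccount exists.
            if exec.Command("kubectl", "get", "sa", "default").Run() == nil {
                fmt.Println("default ServiceAccount is ready")
                return
            }
            time.Sleep(500 * time.Millisecond) // matches the cadence in the log
        }
        fmt.Println("timed out waiting for the default ServiceAccount")
    }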
	I1204 23:11:38.333851  389201 kubeadm.go:394] duration metric: took 13.922029737s to StartCluster
	I1204 23:11:38.333875  389201 settings.go:142] acquiring lock: {Name:mke2b5bd7468e0e3a170be0f2243b433cdca2b2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:38.334020  389201 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20045-381016/kubeconfig
	I1204 23:11:38.334556  389201 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/kubeconfig: {Name:mk53a4e908644f8dfb244bee65db94736a5dc52e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:11:38.334826  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1204 23:11:38.334847  389201 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1204 23:11:38.334940  389201 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1204 23:11:38.335050  389201 config.go:182] Loaded profile config "addons-630093": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:11:38.335067  389201 addons.go:69] Setting yakd=true in profile "addons-630093"
	I1204 23:11:38.335086  389201 addons.go:234] Setting addon yakd=true in "addons-630093"
	I1204 23:11:38.335088  389201 addons.go:69] Setting inspektor-gadget=true in profile "addons-630093"
	I1204 23:11:38.335099  389201 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-630093"
	I1204 23:11:38.335108  389201 addons.go:69] Setting gcp-auth=true in profile "addons-630093"
	I1204 23:11:38.335116  389201 addons.go:234] Setting addon inspektor-gadget=true in "addons-630093"
	I1204 23:11:38.335118  389201 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-630093"
	I1204 23:11:38.335126  389201 mustload.go:65] Loading cluster: addons-630093
	I1204 23:11:38.335120  389201 addons.go:69] Setting storage-provisioner=true in profile "addons-630093"
	I1204 23:11:38.335142  389201 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-630093"
	I1204 23:11:38.335151  389201 addons.go:234] Setting addon storage-provisioner=true in "addons-630093"
	I1204 23:11:38.335142  389201 addons.go:69] Setting ingress=true in profile "addons-630093"
	I1204 23:11:38.335165  389201 addons.go:69] Setting ingress-dns=true in profile "addons-630093"
	I1204 23:11:38.335168  389201 addons.go:234] Setting addon ingress=true in "addons-630093"
	I1204 23:11:38.335170  389201 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-630093"
	I1204 23:11:38.335177  389201 addons.go:234] Setting addon ingress-dns=true in "addons-630093"
	I1204 23:11:38.335175  389201 addons.go:69] Setting metrics-server=true in profile "addons-630093"
	I1204 23:11:38.335186  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335187  389201 addons.go:234] Setting addon metrics-server=true in "addons-630093"
	I1204 23:11:38.335201  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335205  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335251  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335270  389201 config.go:182] Loaded profile config "addons-630093": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:11:38.335598  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335639  389201 addons.go:69] Setting registry=true in profile "addons-630093"
	I1204 23:11:38.335664  389201 addons.go:234] Setting addon registry=true in "addons-630093"
	I1204 23:11:38.335690  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335770  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335788  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335788  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335799  389201 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-630093"
	I1204 23:11:38.335865  389201 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-630093"
	I1204 23:11:38.335890  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.336127  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.336356  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335154  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335131  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.337395  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335166  389201 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-630093"
	I1204 23:11:38.337522  389201 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-630093"
	I1204 23:11:38.335779  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.337583  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335154  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.335618  389201 addons.go:69] Setting volcano=true in profile "addons-630093"
	I1204 23:11:38.337980  389201 addons.go:234] Setting addon volcano=true in "addons-630093"
	I1204 23:11:38.338050  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.338346  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.338511  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.338659  389201 out.go:177] * Verifying Kubernetes components...
	I1204 23:11:38.338743  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335079  389201 addons.go:69] Setting cloud-spanner=true in profile "addons-630093"
	I1204 23:11:38.339343  389201 addons.go:234] Setting addon cloud-spanner=true in "addons-630093"
	I1204 23:11:38.339416  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.342329  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.343246  389201 addons.go:69] Setting default-storageclass=true in profile "addons-630093"
	I1204 23:11:38.343284  389201 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-630093"
	I1204 23:11:38.343690  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.343795  389201 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1204 23:11:38.335605  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.335627  389201 addons.go:69] Setting volumesnapshots=true in profile "addons-630093"
	I1204 23:11:38.344127  389201 addons.go:234] Setting addon volumesnapshots=true in "addons-630093"
	I1204 23:11:38.344187  389201 host.go:66] Checking if "addons-630093" exists ...
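The interleaved, out-of-order timestamps in the "Setting addon ..." burst above suggest one worker per addon running concurrently. A sketch of that fan-out under that assumption (the toEnable map is a subset of the one logged at addons.go:507; enableAddon is a hypothetical stand-in for minikube's per-addon setup):

    package main

    import (
    	"fmt"
    	"sync"
    )

    func main() {
    	toEnable := map[string]bool{ // subset of the logged toEnable map
    		"ingress": true, "registry": true, "metrics-server": true,
    		"volcano": true, "yakd": true, "dashboard": false,
    	}
    	var wg sync.WaitGroup
    	for name, enabled := range toEnable {
    		if !enabled {
    			continue
    		}
    		wg.Add(1)
    		go func(name string) {
    			defer wg.Done()
    			// enableAddon is hypothetical; it stands in for staging
    			// manifests, applying them, and verifying the addon.
    			enableAddon(name)
    		}(name)
    	}
    	wg.Wait()
    }

    func enableAddon(name string) { fmt.Println("enabling", name) }

Running the addons in parallel is why the subsequent "Using image ..." and "installing ..." lines from different addons interleave.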
	I1204 23:11:38.369102  389201 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1204 23:11:38.370392  389201 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1204 23:11:38.370441  389201 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1204 23:11:38.370514  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.375367  389201 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1204 23:11:38.376764  389201 out.go:177]   - Using image docker.io/registry:2.8.3
	I1204 23:11:38.378315  389201 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1204 23:11:38.378339  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1204 23:11:38.378415  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.387789  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.390443  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.396264  389201 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1204 23:11:38.397739  389201 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1204 23:11:38.397765  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1204 23:11:38.397836  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.403885  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1204 23:11:38.404091  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.406664  389201 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1204 23:11:38.407794  389201 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1204 23:11:38.409084  389201 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1204 23:11:38.413429  389201 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1204 23:11:38.413459  389201 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1204 23:11:38.413462  389201 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1204 23:11:38.413531  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.413533  389201 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1204 23:11:38.413544  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1204 23:11:38.413597  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.413711  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1204 23:11:38.413833  389201 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 23:11:38.413845  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1204 23:11:38.413897  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.414878  389201 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1204 23:11:38.414894  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1204 23:11:38.414957  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.416261  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1204 23:11:38.418117  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1204 23:11:38.419304  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1204 23:11:38.420751  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1204 23:11:38.422006  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1204 23:11:38.423748  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1204 23:11:38.424837  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1204 23:11:38.424860  389201 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1204 23:11:38.424941  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.430181  389201 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1204 23:11:38.434134  389201 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1204 23:11:38.434699  389201 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1204 23:11:38.435845  389201 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1204 23:11:38.435868  389201 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1204 23:11:38.435951  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.438678  389201 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1204 23:11:38.444191  389201 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1204 23:11:38.444221  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1204 23:11:38.444288  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.451026  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.452847  389201 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1204 23:11:38.454187  389201 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.33.0
	I1204 23:11:38.454245  389201 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1204 23:11:38.454263  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1204 23:11:38.454326  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.455564  389201 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1204 23:11:38.455600  389201 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1204 23:11:38.455669  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
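Each addon above follows the same triple: log "installing /etc/kubernetes/addons/X.yaml", scp the manifest onto the node, then apply it with the node's pinned kubectl. The apply command shape is visible verbatim later in the log; a sketch of a helper that builds it (sketch only; minikube runs this over SSH inside the node rather than locally):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // applyManifests mirrors the logged command shape:
    //   sudo KUBECONFIG=... kubectl apply -f a.yaml -f b.yaml ...
    func applyManifests(kubectlPath, kubeconfig string, manifests []string) error {
    	args := []string{"KUBECONFIG=" + kubeconfig, kubectlPath, "apply"}
    	for _, m := range manifests {
    		args = append(args, "-f", m)
    	}
    	out, err := exec.Command("sudo", args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("apply %s: %v\n%s", strings.Join(manifests, " "), err, out)
    	}
    	return nil
    }

    func main() {
    	_ = applyManifests("/var/lib/minikube/binaries/v1.31.2/kubectl",
    		"/var/lib/minikube/kubeconfig",
    		[]string{"/etc/kubernetes/addons/ig-crd.yaml",
    			"/etc/kubernetes/addons/ig-deployment.yaml"})
    }

Passing KUBECONFIG=... as a leading argument works because sudo treats leading VAR=value words as environment assignments for the command.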
	W1204 23:11:38.458222  389201 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1204 23:11:38.462209  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.470069  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.470586  389201 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-630093"
	I1204 23:11:38.470686  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.471216  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.473482  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.476209  389201 addons.go:234] Setting addon default-storageclass=true in "addons-630093"
	I1204 23:11:38.476266  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:38.476733  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:38.477420  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.486737  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.488076  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.494091  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.494760  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.500157  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.514409  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.517053  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.526764  389201 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1204 23:11:38.528218  389201 out.go:177]   - Using image docker.io/busybox:stable
	I1204 23:11:38.529542  389201 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1204 23:11:38.529568  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1204 23:11:38.529635  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.532873  389201 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1204 23:11:38.532892  389201 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1204 23:11:38.532949  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:38.547794  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:38.550902  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
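Every "new ssh client" line above targets 127.0.0.1:33140 because the node is a Docker container: its sshd on 22/tcp is published on a random host port, which minikube recovers with the docker-inspect format string logged repeatedly above. A sketch of that lookup (container name taken from the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // sshHostPort recovers the host port mapped to the container's sshd,
    // using the same Go template the log shows cli_runner executing.
    func sshHostPort(container string) (string, error) {
    	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "container", "inspect",
    		"-f", format, container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := sshHostPort("addons-630093")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("ssh 127.0.0.1 port:", port) // 33140 in this run
    }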
	I1204 23:11:38.714491  389201 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1204 23:11:38.714590  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1204 23:11:38.730697  389201 node_ready.go:35] waiting up to 6m0s for node "addons-630093" to be "Ready" ...
	I1204 23:11:38.896083  389201 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1204 23:11:38.896129  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1204 23:11:38.902650  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1204 23:11:38.903274  389201 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1204 23:11:38.903334  389201 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1204 23:11:38.908154  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1204 23:11:38.995367  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1204 23:11:38.996682  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1204 23:11:39.003953  389201 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1204 23:11:39.003987  389201 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1204 23:11:39.009058  389201 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1204 23:11:39.009092  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14451 bytes)
	I1204 23:11:39.011952  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1204 23:11:39.015960  389201 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1204 23:11:39.015992  389201 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1204 23:11:39.095325  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1204 23:11:39.099215  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1204 23:11:39.107754  389201 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1204 23:11:39.107787  389201 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1204 23:11:39.111656  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1204 23:11:39.199729  389201 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1204 23:11:39.199775  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1204 23:11:39.206060  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1204 23:11:39.206157  389201 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1204 23:11:39.207660  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1204 23:11:39.313681  389201 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 23:11:39.313712  389201 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1204 23:11:39.315754  389201 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1204 23:11:39.315836  389201 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1204 23:11:39.402197  389201 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1204 23:11:39.402298  389201 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1204 23:11:39.497285  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1204 23:11:39.613001  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1204 23:11:39.795499  389201 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1204 23:11:39.795537  389201 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1204 23:11:39.908631  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1204 23:11:39.908730  389201 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1204 23:11:40.110384  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1204 23:11:40.110490  389201 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1204 23:11:40.203583  389201 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1204 23:11:40.203684  389201 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1204 23:11:40.302900  389201 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1204 23:11:40.302989  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1204 23:11:40.305736  389201 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.591107897s)
	I1204 23:11:40.305865  389201 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
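The long bash pipeline completed above edits the coredns ConfigMap in place: it inserts a "hosts" block ahead of the forward directive so that host.minikube.internal resolves to the gateway IP, then replaces the ConfigMap. A Go sketch of the same Corefile edit the sed expression performs (string manipulation only; the real flow round-trips through kubectl get/replace):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // injectHostRecord inserts a hosts block before CoreDNS's forward
    // directive, matching the effect of the sed pipeline in the log.
    func injectHostRecord(corefile, hostIP string) string {
    	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
    	var b strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
    			b.WriteString(hosts)
    		}
    		b.WriteString(line)
    	}
    	return b.String()
    }

    func main() {
    	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
    	fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
    }

The fallthrough keeps CoreDNS forwarding everything except the injected name, so cluster DNS behaviour is otherwise unchanged.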
	I1204 23:11:40.415986  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.513233503s)
	I1204 23:11:40.516873  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1204 23:11:40.516909  389201 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1204 23:11:40.606740  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1204 23:11:40.606836  389201 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1204 23:11:40.706038  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1204 23:11:41.013840  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.105639169s)
	I1204 23:11:41.019324  389201 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-630093" context rescaled to 1 replicas
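The rescale reported by kapi.go:214 drops coredns to a single replica on this one-node cluster. How kapi.go does it internally is not shown in the log; one way to express the same operation with client-go's scale subresource, as an assumed illustration:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // Rescale the coredns deployment to one replica via the scale
    // subresource (illustrative; not necessarily minikube's code path).
    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.TODO()
    	deps := cs.AppsV1().Deployments("kube-system")
    	scale, err := deps.GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	scale.Spec.Replicas = 1
    	if _, err := deps.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    	fmt.Println("coredns rescaled to 1 replica")
    }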
	I1204 23:11:41.019970  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
	I1204 23:11:41.098870  389201 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1204 23:11:41.098907  389201 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1204 23:11:41.103755  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (2.108338868s)
	I1204 23:11:41.296521  389201 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1204 23:11:41.296620  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1204 23:11:41.604186  389201 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1204 23:11:41.604271  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1204 23:11:41.711584  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1204 23:11:41.895283  389201 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1204 23:11:41.895375  389201 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1204 23:11:42.005218  389201 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1204 23:11:42.005322  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1204 23:11:42.196571  389201 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1204 23:11:42.196687  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1204 23:11:42.209452  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.212725161s)
	I1204 23:11:42.322610  389201 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1204 23:11:42.322752  389201 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1204 23:11:42.502862  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1204 23:11:42.809979  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.797973312s)
	I1204 23:11:42.810142  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.714779141s)
	I1204 23:11:43.015142  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.91582183s)
	I1204 23:11:43.300319  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
	I1204 23:11:44.520283  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.40857896s)
	I1204 23:11:44.520372  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.02299016s)
	I1204 23:11:44.520392  389201 addons.go:475] Verifying addon ingress=true in "addons-630093"
	I1204 23:11:44.520419  389201 addons.go:475] Verifying addon registry=true in "addons-630093"
	I1204 23:11:44.520330  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.312579258s)
	I1204 23:11:44.520780  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (3.814712029s)
	I1204 23:11:44.520741  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.907702215s)
	I1204 23:11:44.521986  389201 addons.go:475] Verifying addon metrics-server=true in "addons-630093"
	I1204 23:11:44.522358  389201 out.go:177] * Verifying ingress addon...
	I1204 23:11:44.522391  389201 out.go:177] * Verifying registry addon...
	I1204 23:11:44.523305  389201 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-630093 service yakd-dashboard -n yakd-dashboard
	
	I1204 23:11:44.525119  389201 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1204 23:11:44.525119  389201 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1204 23:11:44.600633  389201 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1204 23:11:44.600664  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:44.600855  389201 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1204 23:11:44.600872  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
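The long runs of kapi.go:96 "waiting for pod ... Pending" lines that follow are a list-and-recheck loop over a label selector. A bounded sketch of that check using client-go (an assumption about tooling; the selector, namespace, and kubeconfig path are taken from the log):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // Poll until every pod matching the selector reports Ready, the
    // pattern behind the repeated kapi.go "waiting for pod" lines.
    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	selector := "app.kubernetes.io/name=ingress-nginx" // from the log
    	for start := time.Now(); time.Since(start) < 6*time.Minute; {
    		pods, err := cs.CoreV1().Pods("ingress-nginx").List(context.TODO(),
    			metav1.ListOptions{LabelSelector: selector})
    		if err == nil && allReady(pods.Items) {
    			fmt.Println("all pods Ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for pods")
    }

    func allReady(pods []corev1.Pod) bool {
    	if len(pods) == 0 {
    		return false
    	}
    	for _, p := range pods {
    		ready := false
    		for _, c := range p.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				ready = true
    			}
    		}
    		if !ready {
    			return false
    		}
    	}
    	return true
    }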
	I1204 23:11:45.030335  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:45.031111  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:45.524701  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.813019436s)
	W1204 23:11:45.524761  389201 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1204 23:11:45.524790  389201 retry.go:31] will retry after 181.865687ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
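The failure above is an ordering race, not a broken manifest: the batch creates the VolumeSnapshot CRDs and a VolumeSnapshotClass in the same kubectl apply, and the apiserver has not yet registered the new snapshot.storage.k8s.io/v1 kinds when the custom resource is validated, hence "no matches for kind". minikube's answer, visible just below, is to retry after a short backoff and re-apply with --force. A sketch of that retry shape (the initial backoff and attempt count are assumed values, echoing the "will retry after 181.865687ms" line):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // applyWithRetry retries a kubectl apply with doubling backoff,
    // mirroring the behaviour retry.go reports in the log above.
    func applyWithRetry(args []string, attempts int) error {
    	backoff := 200 * time.Millisecond // assumed starting backoff
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = exec.Command("kubectl", args...).Run(); err == nil {
    			return nil
    		}
    		time.Sleep(backoff)
    		backoff *= 2
    	}
    	return fmt.Errorf("apply failed after %d attempts: %w", attempts, err)
    }

    func main() {
    	// CRDs must be established before custom resources that use them;
    	// a brief retry (or apply --force, as the log shows) absorbs the race.
    	err := applyWithRetry([]string{"apply", "--force",
    		"-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"}, 5)
    	fmt.Println(err)
    }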
	I1204 23:11:45.529400  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:45.529925  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:45.620284  389201 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1204 23:11:45.620363  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:45.640586  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:45.707473  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1204 23:11:45.802964  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
	I1204 23:11:45.916555  389201 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1204 23:11:45.999202  389201 addons.go:234] Setting addon gcp-auth=true in "addons-630093"
	I1204 23:11:45.999264  389201 host.go:66] Checking if "addons-630093" exists ...
	I1204 23:11:45.999784  389201 cli_runner.go:164] Run: docker container inspect addons-630093 --format={{.State.Status}}
	I1204 23:11:46.028530  389201 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1204 23:11:46.028595  389201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-630093
	I1204 23:11:46.031316  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:46.031818  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:46.049437  389201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/addons-630093/id_rsa Username:docker}
	I1204 23:11:46.408520  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.905505829s)
	I1204 23:11:46.408572  389201 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-630093"
	I1204 23:11:46.410390  389201 out.go:177] * Verifying csi-hostpath-driver addon...
	I1204 23:11:46.413226  389201 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1204 23:11:46.423132  389201 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1204 23:11:46.423158  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:46.530521  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:46.530917  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:46.918004  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:47.028913  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:47.029388  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:47.417466  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:47.531801  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:47.532309  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:47.916654  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:48.028517  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:48.029048  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:48.236314  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
	I1204 23:11:48.416588  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:48.528958  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:48.529570  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:48.735256  389201 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.027721867s)
	I1204 23:11:48.735290  389201 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.706722291s)
	I1204 23:11:48.737269  389201 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1204 23:11:48.738737  389201 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1204 23:11:48.739945  389201 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1204 23:11:48.739962  389201 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1204 23:11:48.757606  389201 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1204 23:11:48.757640  389201 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1204 23:11:48.774462  389201 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1204 23:11:48.774491  389201 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1204 23:11:48.791359  389201 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1204 23:11:48.917479  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:49.028378  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:49.028791  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:49.119035  389201 addons.go:475] Verifying addon gcp-auth=true in "addons-630093"
	I1204 23:11:49.120662  389201 out.go:177] * Verifying gcp-auth addon...
	I1204 23:11:49.123168  389201 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1204 23:11:49.127558  389201 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1204 23:11:49.127594  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:49.417311  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:49.529241  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:49.529771  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:49.626790  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:49.917626  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:50.028348  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:50.028726  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:50.128054  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:50.417233  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:50.529158  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:50.529580  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:50.627050  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:50.734676  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
	I1204 23:11:50.917259  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:51.029147  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:51.029767  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:51.126874  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:51.417238  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:51.529239  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:51.529661  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:51.627160  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:51.916950  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:52.028762  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:52.029207  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:52.127128  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:52.417313  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:52.529136  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:52.529632  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:52.626885  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:52.917040  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:53.028643  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:53.029069  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:53.126271  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:53.233877  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
	I1204 23:11:53.417285  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:53.529030  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:53.529451  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:53.626877  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:53.917489  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:54.029327  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:54.029771  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:54.127217  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:54.416734  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:54.528697  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:54.529051  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:54.626826  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:54.916888  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:55.028438  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:55.028959  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:55.126396  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:55.234291  389201 node_ready.go:53] node "addons-630093" has status "Ready":"False"
	I1204 23:11:55.417202  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:55.528962  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:55.529441  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:55.626790  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:55.917367  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:56.028910  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:56.029339  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:56.127003  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:56.416550  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:56.528268  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:56.528637  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:56.626903  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:56.917742  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:57.028644  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:57.029259  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:57.126655  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:57.417402  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:57.528943  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:57.529266  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:57.626610  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:57.802859  389201 node_ready.go:49] node "addons-630093" has status "Ready":"True"
	I1204 23:11:57.802968  389201 node_ready.go:38] duration metric: took 19.072220894s for node "addons-630093" to be "Ready" ...
	I1204 23:11:57.803001  389201 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1204 23:11:57.812284  389201 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace to be "Ready" ...
	I1204 23:11:57.918256  389201 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1204 23:11:57.918288  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:58.028987  389201 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1204 23:11:58.029025  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:58.029163  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:58.128052  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:58.418190  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:58.529517  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:58.529923  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:58.627312  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:58.919346  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:59.029950  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:59.030369  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:59.127570  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:59.418251  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:11:59.530785  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:11:59.531584  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:11:59.630759  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:11:59.818327  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:11:59.918676  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:00.030531  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:00.030960  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:00.127203  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:00.418498  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:00.529214  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:00.529347  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:00.626705  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:00.919036  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:01.029541  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:01.029735  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:01.127079  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:01.417804  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:01.529706  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:01.530306  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:01.626425  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:01.818875  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:01.918913  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:02.029895  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:02.030382  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:02.127260  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:02.423666  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:02.529870  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:02.530595  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:02.627705  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:02.918184  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:03.096822  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:03.098279  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:03.126704  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:03.418293  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:03.530189  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:03.531307  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:03.626994  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:03.819175  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:03.919019  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:04.029490  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:04.030689  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:04.127527  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:04.418611  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:04.529829  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:04.530049  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:04.627138  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:04.918884  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:05.029547  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:05.030544  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:05.127501  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:05.418586  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:05.529727  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:05.530098  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:05.629968  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:05.819250  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:05.917895  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:06.030341  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:06.030532  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:06.130159  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:06.417534  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:06.529640  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:06.529905  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:06.626512  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:06.918521  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:07.029270  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:07.029688  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:07.127053  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:07.417502  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:07.529692  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:07.530328  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:07.629361  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:07.917534  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:08.029222  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:08.029469  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:08.127082  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:08.319034  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:08.419261  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:08.529942  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:08.530672  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:08.627267  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:08.917968  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:09.029951  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:09.030163  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:09.126878  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:09.418269  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:09.529306  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:09.529537  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:09.627199  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:09.918335  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:10.029495  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:10.029837  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:10.127443  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:10.319436  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:10.418755  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:10.529622  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:10.529807  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:10.626252  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:10.917779  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:11.030059  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:11.030182  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:11.127180  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:11.419556  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:11.530723  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:11.531122  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:11.626618  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:11.918234  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:12.029550  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:12.029678  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:12.127740  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:12.418986  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:12.530019  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:12.530137  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:12.630114  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:12.819093  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:12.918200  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:13.029270  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:13.029507  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:13.127361  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:13.418296  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:13.528977  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:13.529560  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:13.629701  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:13.918107  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:14.028623  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:14.029060  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:14.126995  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:14.417833  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:14.601066  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:14.601685  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:14.700398  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:14.819539  389201 pod_ready.go:103] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:14.918753  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:15.029149  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:15.029311  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:15.127355  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:15.417956  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:15.530046  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:15.530173  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:15.626804  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:15.817465  389201 pod_ready.go:93] pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:15.817493  389201 pod_ready.go:82] duration metric: took 18.005165509s for pod "amd-gpu-device-plugin-xfdff" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.817504  389201 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-nvslc" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.822063  389201 pod_ready.go:93] pod "coredns-7c65d6cfc9-nvslc" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:15.822085  389201 pod_ready.go:82] duration metric: took 4.574786ms for pod "coredns-7c65d6cfc9-nvslc" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.822105  389201 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.826436  389201 pod_ready.go:93] pod "etcd-addons-630093" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:15.826459  389201 pod_ready.go:82] duration metric: took 4.348229ms for pod "etcd-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.826472  389201 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.831213  389201 pod_ready.go:93] pod "kube-apiserver-addons-630093" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:15.831241  389201 pod_ready.go:82] duration metric: took 4.762165ms for pod "kube-apiserver-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.831254  389201 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.835452  389201 pod_ready.go:93] pod "kube-controller-manager-addons-630093" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:15.835474  389201 pod_ready.go:82] duration metric: took 4.212413ms for pod "kube-controller-manager-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.835486  389201 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k9l4p" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:15.918128  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:16.028729  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:16.029367  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:16.127315  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:16.216237  389201 pod_ready.go:93] pod "kube-proxy-k9l4p" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:16.216263  389201 pod_ready.go:82] duration metric: took 380.769812ms for pod "kube-proxy-k9l4p" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:16.216274  389201 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:16.417739  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:16.529747  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:16.530393  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:16.615744  389201 pod_ready.go:93] pod "kube-scheduler-addons-630093" in "kube-system" namespace has status "Ready":"True"
	I1204 23:12:16.615777  389201 pod_ready.go:82] duration metric: took 399.4948ms for pod "kube-scheduler-addons-630093" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:16.615792  389201 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace to be "Ready" ...
	I1204 23:12:16.629644  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:16.918480  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:17.029640  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:17.030079  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:17.127575  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:17.418114  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:17.528932  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:17.530075  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:17.704033  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:17.998609  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:18.099865  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:18.100201  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:18.197667  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:18.418883  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:18.599572  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:18.600671  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:18.701570  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:18.703573  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:18.920015  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:19.100730  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:19.102395  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:19.198834  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:19.418509  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:19.529727  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:19.530383  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:19.626273  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:19.918805  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:20.029240  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:20.029932  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:20.126903  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:20.418249  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:20.529801  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:20.530308  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:20.626097  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:20.918878  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:21.029289  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:21.029519  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:21.122606  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:21.126039  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:21.418484  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:21.529710  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:21.530710  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:21.626146  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:21.918962  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:22.029458  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:22.029740  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:22.127214  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:22.419474  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:22.530071  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:22.530666  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:22.626757  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:22.919558  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:23.030183  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:23.030603  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:23.126737  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:23.419160  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:23.530176  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:23.530357  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:23.622846  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:23.626203  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:23.918700  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:24.028728  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:24.028982  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:24.126654  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:24.417980  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:24.530135  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:24.531100  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:24.627054  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:24.918427  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:25.028887  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:25.029218  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:25.126097  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:25.418781  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:25.529648  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:25.529792  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:25.625375  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:25.918175  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:26.029449  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:26.029717  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:26.121949  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:26.125965  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:26.418478  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:26.529251  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:26.529458  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:26.626865  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:26.918569  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:27.029067  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:27.030277  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:27.125626  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:27.418385  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:27.528662  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:27.529405  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:27.628474  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:27.917874  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:28.029704  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:28.029928  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:28.122056  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:28.126396  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:28.419714  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:28.529079  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:28.529300  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:28.628622  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:28.918659  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:29.028740  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:29.029352  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:29.126050  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:29.417959  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:29.529472  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:29.530620  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:29.629092  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:29.919400  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:30.030302  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:30.030514  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:30.122668  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:30.126280  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:30.418540  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:30.529288  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:30.529642  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:30.626549  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:30.918094  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:31.028726  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:31.029185  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:31.127032  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:31.418917  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:31.529225  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:31.529895  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:31.626376  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:31.917674  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:32.029127  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:32.029446  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:32.126980  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:32.418178  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:32.529226  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:32.529801  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:32.622787  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:32.629901  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:32.918843  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:33.029651  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:33.029732  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:33.126752  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:33.417866  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:33.529615  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:33.529803  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:33.626861  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:33.918296  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:34.029295  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:34.029827  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:34.126281  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:34.418699  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:34.529505  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1204 23:12:34.529651  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:34.642845  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:35.016246  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:35.029633  389201 kapi.go:107] duration metric: took 50.504509788s to wait for kubernetes.io/minikube-addons=registry ...
	I1204 23:12:35.030572  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:35.122008  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:35.126344  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:35.418953  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:35.529492  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:35.629301  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:35.917990  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:36.029160  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:36.126923  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:36.418071  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:36.530620  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:36.626415  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:36.918072  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:37.030355  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:37.122395  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:37.130220  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:37.418413  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:37.528927  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:37.625990  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:37.918227  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:38.029187  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:38.126369  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:38.417932  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:38.598800  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:38.697192  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:38.919507  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:39.029934  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:39.126608  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:39.417800  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:39.529782  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:39.621784  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:39.626154  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:39.918849  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:40.030159  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:40.126095  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:40.418225  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:40.531480  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:40.626066  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:40.922455  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:41.030073  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:41.132353  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:41.419213  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:41.530198  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:41.623990  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:41.626185  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:41.918285  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:42.029080  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:42.126525  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:42.417894  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:42.530073  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:42.628888  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:42.917931  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:43.029806  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:43.129456  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:43.417942  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:43.530219  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:43.626382  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:43.919862  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:44.030101  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:44.121891  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:44.126376  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:44.418428  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:44.529385  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:44.626961  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:44.918331  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:45.029815  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:45.130119  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:45.418987  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:45.530112  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:45.626679  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:45.917695  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:46.030308  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:46.122743  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:46.125898  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:46.418369  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:46.530377  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:46.626026  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:46.919590  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:47.029382  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:47.126945  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:47.418103  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:47.529610  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:47.626586  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:47.918784  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:48.030793  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:48.123333  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:48.125995  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:48.418085  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1204 23:12:48.529161  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:48.625851  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:48.918833  389201 kapi.go:107] duration metric: took 1m2.505604843s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1204 23:12:49.029518  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:49.126520  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:49.529429  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:49.626178  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:50.028779  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:50.126359  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:50.529535  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:50.621344  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:50.626657  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:51.029711  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:51.126167  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:51.528977  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:51.625730  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:52.029401  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:52.126687  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:52.529779  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:52.622444  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:52.626730  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:53.029789  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:53.125660  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:53.529648  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:53.625950  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:54.029567  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:54.126564  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:54.529619  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:54.626519  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:55.029917  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:55.121799  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:55.125909  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:55.530199  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:55.626324  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:56.029734  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:56.125940  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:56.529705  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:56.626054  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:57.072272  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:57.122241  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:57.126623  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:57.529316  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:57.626270  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:58.029340  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:58.126509  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:58.529559  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:58.626455  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:59.029135  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:59.126845  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:12:59.529933  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:12:59.621754  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:12:59.625881  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:00.029773  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:00.126622  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:00.529528  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:00.626582  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:01.029576  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:01.127058  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:01.530191  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:01.622552  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:01.626939  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:02.030598  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:02.130438  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:02.529743  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:02.626141  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:03.030953  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:03.149927  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:03.529333  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:03.622858  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:03.626677  389201 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1204 23:13:04.029338  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:04.128963  389201 kapi.go:107] duration metric: took 1m15.005791002s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1204 23:13:04.130952  389201 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-630093 cluster.
	I1204 23:13:04.132630  389201 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1204 23:13:04.134066  389201 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1204 23:13:04.599921  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:05.100341  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:05.599382  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:05.623902  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:06.029904  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:06.529164  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:07.029826  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:07.531039  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:08.030122  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:08.123005  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:08.529214  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:09.029839  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:09.529349  389201 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1204 23:13:10.030137  389201 kapi.go:107] duration metric: took 1m25.505015693s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1204 23:13:10.032415  389201 out.go:177] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, default-storageclass, ingress-dns, storage-provisioner, cloud-spanner, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1204 23:13:10.034021  389201 addons.go:510] duration metric: took 1m31.699072904s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin default-storageclass ingress-dns storage-provisioner cloud-spanner storage-provisioner-rancher inspektor-gadget metrics-server yakd volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
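The kapi.go lines above poll each addon's label selector roughly twice a second until its pods leave Pending, then record a duration metric. A minimal sketch of that polling pattern with client-go follows; the function name, interval, and retry behavior are illustrative assumptions, not minikube's actual kapi.go code.

```go
// Sketch of a label-selector wait loop in the style of the kapi.go
// polling above. Names and intervals are assumptions for illustration.
package example

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForLabeledPodsRunning polls every 500ms until all pods matching the
// selector report phase Running, or the timeout elapses.
func waitForLabeledPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true, func(ctx context.Context) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil || len(pods.Items) == 0 {
			return false, nil // transient errors and empty lists are retried
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				return false, nil
			}
		}
		return true, nil
	})
}
```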
	I1204 23:13:10.622508  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:13.121894  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:15.622516  389201 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"False"
	I1204 23:13:18.122616  389201 pod_ready.go:93] pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace has status "Ready":"True"
	I1204 23:13:18.122655  389201 pod_ready.go:82] duration metric: took 1m1.506852695s for pod "metrics-server-84c5f94fbc-vtkhx" in "kube-system" namespace to be "Ready" ...
	I1204 23:13:18.122671  389201 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-rj8jd" in "kube-system" namespace to be "Ready" ...
	I1204 23:13:18.127666  389201 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-rj8jd" in "kube-system" namespace has status "Ready":"True"
	I1204 23:13:18.127689  389201 pod_ready.go:82] duration metric: took 5.009056ms for pod "nvidia-device-plugin-daemonset-rj8jd" in "kube-system" namespace to be "Ready" ...
	I1204 23:13:18.127712  389201 pod_ready.go:39] duration metric: took 1m20.324660399s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
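Unlike the kapi.go loop, the pod_ready.go lines above key off the pod's "Ready" condition rather than its phase, which is why a pod can sit at status "Ready":"False" for a minute while its containers come up. A minimal helper performing the same test (assumed shape, not minikube's actual pod_ready.go helper):

```go
package example

import corev1 "k8s.io/api/core/v1"

// isPodReady reports whether the pod's PodReady condition is True —
// the same check the pod_ready.go log lines above are making.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}
```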
	I1204 23:13:18.127736  389201 api_server.go:52] waiting for apiserver process to appear ...
	I1204 23:13:18.127773  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 23:13:18.127852  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 23:13:18.163496  389201 cri.go:89] found id: "697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f"
	I1204 23:13:18.163523  389201 cri.go:89] found id: ""
	I1204 23:13:18.163535  389201 logs.go:282] 1 containers: [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f]
	I1204 23:13:18.163604  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:18.167359  389201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 23:13:18.167448  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 23:13:18.204556  389201 cri.go:89] found id: "249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1"
	I1204 23:13:18.204586  389201 cri.go:89] found id: ""
	I1204 23:13:18.204598  389201 logs.go:282] 1 containers: [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1]
	I1204 23:13:18.204666  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:18.208385  389201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 23:13:18.208480  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 23:13:18.243732  389201 cri.go:89] found id: "1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2"
	I1204 23:13:18.243758  389201 cri.go:89] found id: ""
	I1204 23:13:18.243766  389201 logs.go:282] 1 containers: [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2]
	I1204 23:13:18.243825  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:18.247475  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 23:13:18.247549  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 23:13:18.284446  389201 cri.go:89] found id: "f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc"
	I1204 23:13:18.284481  389201 cri.go:89] found id: ""
	I1204 23:13:18.284494  389201 logs.go:282] 1 containers: [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc]
	I1204 23:13:18.284553  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:18.288056  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 23:13:18.288154  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 23:13:18.322998  389201 cri.go:89] found id: "76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d"
	I1204 23:13:18.323035  389201 cri.go:89] found id: ""
	I1204 23:13:18.323071  389201 logs.go:282] 1 containers: [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d]
	I1204 23:13:18.323127  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:18.326560  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 23:13:18.326662  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 23:13:18.360672  389201 cri.go:89] found id: "c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec"
	I1204 23:13:18.360695  389201 cri.go:89] found id: ""
	I1204 23:13:18.360704  389201 logs.go:282] 1 containers: [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec]
	I1204 23:13:18.360759  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:18.364394  389201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 23:13:18.364465  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 23:13:18.398753  389201 cri.go:89] found id: "f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac"
	I1204 23:13:18.398779  389201 cri.go:89] found id: ""
	I1204 23:13:18.398788  389201 logs.go:282] 1 containers: [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac]
	I1204 23:13:18.398837  389201 ssh_runner.go:195] Run: which crictl
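The cri.go block above discovers one container ID per control-plane component by shelling out to `sudo crictl ps -a --quiet --name=<component>` over SSH. A self-contained sketch of that discovery step run locally (simplified error handling; the real code goes through minikube's ssh_runner):

```go
// Sketch of the container-ID discovery the cri.go lines above perform.
package example

import (
	"os/exec"
	"strings"
)

// listContainerIDs returns the IDs of all CRI containers (any state)
// whose name matches the given component, e.g. "kube-apiserver".
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}
```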
	I1204 23:13:18.402272  389201 logs.go:123] Gathering logs for CRI-O ...
	I1204 23:13:18.402308  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 23:13:18.480499  389201 logs.go:123] Gathering logs for etcd [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1] ...
	I1204 23:13:18.480540  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1"
	I1204 23:13:18.524595  389201 logs.go:123] Gathering logs for kube-scheduler [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc] ...
	I1204 23:13:18.524634  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc"
	I1204 23:13:18.566986  389201 logs.go:123] Gathering logs for kube-proxy [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d] ...
	I1204 23:13:18.567027  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d"
	I1204 23:13:18.602070  389201 logs.go:123] Gathering logs for kube-controller-manager [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec] ...
	I1204 23:13:18.602102  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec"
	I1204 23:13:18.658618  389201 logs.go:123] Gathering logs for kindnet [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac] ...
	I1204 23:13:18.658684  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac"
	I1204 23:13:18.696622  389201 logs.go:123] Gathering logs for container status ...
	I1204 23:13:18.696664  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 23:13:18.740640  389201 logs.go:123] Gathering logs for kubelet ...
	I1204 23:13:18.740679  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1204 23:13:18.779439  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:38 addons-630093 kubelet[1643]: W1204 23:11:38.340569    1643 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.779629  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:38 addons-630093 kubelet[1643]: E1204 23:11:38.340638    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.791512  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.658654    1643 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-630093" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.791674  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.658718    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.791800  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.658773    1643 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.791953  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.658814    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.792143  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661330    1643 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.792315  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661384    1643 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.792450  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661600    1643 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.792613  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661632    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.792743  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661689    1643 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.792901  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661706    1643 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.793033  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661862    1643 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.793194  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661888    1643 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:18.793332  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661952    1643 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:18.793495  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661968    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	I1204 23:13:18.826225  389201 logs.go:123] Gathering logs for dmesg ...
	I1204 23:13:18.826269  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 23:13:18.853723  389201 logs.go:123] Gathering logs for describe nodes ...
	I1204 23:13:18.853768  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 23:13:18.956948  389201 logs.go:123] Gathering logs for kube-apiserver [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f] ...
	I1204 23:13:18.956987  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f"
	I1204 23:13:19.002234  389201 logs.go:123] Gathering logs for coredns [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2] ...
	I1204 23:13:19.002271  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2"
	I1204 23:13:19.041497  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:13:19.041531  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1204 23:13:19.041595  389201 out.go:270] X Problems detected in kubelet:
	W1204 23:13:19.041609  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661706    1643 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:19.041619  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661862    1643 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-630093' and this object
	W1204 23:13:19.041628  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661888    1643 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:19.041636  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661952    1643 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:19.041642  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661968    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	I1204 23:13:19.041649  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:13:19.041654  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
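The logs.go:138 warnings above come from scanning the last 400 kubelet journal lines for problem-looking entries, which are then replayed under "X Problems detected in kubelet". A rough sketch of that scan; the filter pattern here is an assumption, and minikube's real matcher in logs.go is more involved:

```go
// Rough sketch of the "Found kubelet problem" scan above. The regex is
// an illustrative assumption, not minikube's actual problem matcher.
package example

import (
	"os/exec"
	"regexp"
	"strings"
)

var problemRE = regexp.MustCompile(`(?i)\b(failed|forbidden|error)\b`)

// kubeletProblems returns recent kubelet journal lines matching the pattern.
func kubeletProblems() ([]string, error) {
	out, err := exec.Command("bash", "-c", "sudo journalctl -u kubelet -n 400").Output()
	if err != nil {
		return nil, err
	}
	var hits []string
	for _, line := range strings.Split(string(out), "\n") {
		if problemRE.MatchString(line) {
			hits = append(hits, line)
		}
	}
	return hits, nil
}
```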
	I1204 23:13:29.043089  389201 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 23:13:29.058130  389201 api_server.go:72] duration metric: took 1m50.723247239s to wait for apiserver process to appear ...
	I1204 23:13:29.058169  389201 api_server.go:88] waiting for apiserver healthz status ...
	I1204 23:13:29.058217  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 23:13:29.058262  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 23:13:29.093177  389201 cri.go:89] found id: "697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f"
	I1204 23:13:29.093208  389201 cri.go:89] found id: ""
	I1204 23:13:29.093217  389201 logs.go:282] 1 containers: [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f]
	I1204 23:13:29.093301  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.096893  389201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 23:13:29.096964  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 23:13:29.132522  389201 cri.go:89] found id: "249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1"
	I1204 23:13:29.132544  389201 cri.go:89] found id: ""
	I1204 23:13:29.132554  389201 logs.go:282] 1 containers: [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1]
	I1204 23:13:29.132596  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.136114  389201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 23:13:29.136174  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 23:13:29.171816  389201 cri.go:89] found id: "1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2"
	I1204 23:13:29.171839  389201 cri.go:89] found id: ""
	I1204 23:13:29.171850  389201 logs.go:282] 1 containers: [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2]
	I1204 23:13:29.171897  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.175512  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 23:13:29.175584  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 23:13:29.212035  389201 cri.go:89] found id: "f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc"
	I1204 23:13:29.212060  389201 cri.go:89] found id: ""
	I1204 23:13:29.212069  389201 logs.go:282] 1 containers: [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc]
	I1204 23:13:29.212116  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.215601  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 23:13:29.215669  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 23:13:29.251281  389201 cri.go:89] found id: "76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d"
	I1204 23:13:29.251304  389201 cri.go:89] found id: ""
	I1204 23:13:29.251312  389201 logs.go:282] 1 containers: [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d]
	I1204 23:13:29.251358  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.255228  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 23:13:29.255342  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 23:13:29.290460  389201 cri.go:89] found id: "c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec"
	I1204 23:13:29.290486  389201 cri.go:89] found id: ""
	I1204 23:13:29.290496  389201 logs.go:282] 1 containers: [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec]
	I1204 23:13:29.290559  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.294114  389201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 23:13:29.294191  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 23:13:29.330311  389201 cri.go:89] found id: "f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac"
	I1204 23:13:29.330336  389201 cri.go:89] found id: ""
	I1204 23:13:29.330346  389201 logs.go:282] 1 containers: [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac]
	I1204 23:13:29.330396  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:29.333992  389201 logs.go:123] Gathering logs for kube-proxy [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d] ...
	I1204 23:13:29.334023  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d"
	I1204 23:13:29.368566  389201 logs.go:123] Gathering logs for kindnet [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac] ...
	I1204 23:13:29.368596  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac"
	I1204 23:13:29.402199  389201 logs.go:123] Gathering logs for CRI-O ...
	I1204 23:13:29.402229  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 23:13:29.482290  389201 logs.go:123] Gathering logs for dmesg ...
	I1204 23:13:29.482339  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 23:13:29.510099  389201 logs.go:123] Gathering logs for describe nodes ...
	I1204 23:13:29.510142  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 23:13:29.615012  389201 logs.go:123] Gathering logs for kube-apiserver [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f] ...
	I1204 23:13:29.615047  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f"
	I1204 23:13:29.660921  389201 logs.go:123] Gathering logs for etcd [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1] ...
	I1204 23:13:29.660962  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1"
	I1204 23:13:29.704015  389201 logs.go:123] Gathering logs for coredns [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2] ...
	I1204 23:13:29.704060  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2"
	I1204 23:13:29.747065  389201 logs.go:123] Gathering logs for kubelet ...
	I1204 23:13:29.747100  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1204 23:13:29.827553  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:38 addons-630093 kubelet[1643]: W1204 23:11:38.340569    1643 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.827776  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:38 addons-630093 kubelet[1643]: E1204 23:11:38.340638    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.839459  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.658654    1643 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-630093" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.839672  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.658718    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.839847  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.658773    1643 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.840075  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.658814    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.840275  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661330    1643 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.840505  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661384    1643 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.840699  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661600    1643 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.840936  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661632    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.841134  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661689    1643 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.841361  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661706    1643 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.841560  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661862    1643 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.841791  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661888    1643 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:29.842000  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661952    1643 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:29.842238  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661968    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	I1204 23:13:29.875377  389201 logs.go:123] Gathering logs for kube-scheduler [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc] ...
	I1204 23:13:29.875420  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc"
	I1204 23:13:29.915909  389201 logs.go:123] Gathering logs for kube-controller-manager [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec] ...
	I1204 23:13:29.915942  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec"
	I1204 23:13:29.975760  389201 logs.go:123] Gathering logs for container status ...
	I1204 23:13:29.975799  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 23:13:30.020004  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:13:30.020036  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1204 23:13:30.020104  389201 out.go:270] X Problems detected in kubelet:
	W1204 23:13:30.020121  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661706    1643 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:30.020132  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661862    1643 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-630093' and this object
	W1204 23:13:30.020149  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661888    1643 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:30.020164  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661952    1643 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:30.020176  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661968    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	I1204 23:13:30.020187  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:13:30.020199  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:13:40.021029  389201 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1204 23:13:40.025015  389201 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1204 23:13:40.026016  389201 api_server.go:141] control plane version: v1.31.2
	I1204 23:13:40.026045  389201 api_server.go:131] duration metric: took 10.967868289s to wait for apiserver health ...
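The api_server.go lines above probe the apiserver's /healthz endpoint and accept an HTTP 200 with body "ok" before moving on to the kube-system pod checks. A minimal version of that probe; skipping TLS verification is an assumption appropriate only for a local test cluster like this one:

```go
// Minimal version of the healthz probe logged above.
package example

import (
	"crypto/tls"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy reports whether the endpoint returns 200 with body "ok".
func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url) // e.g. https://192.168.49.2:8443/healthz
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}
```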
	I1204 23:13:40.026053  389201 system_pods.go:43] waiting for kube-system pods to appear ...
	I1204 23:13:40.026087  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1204 23:13:40.026139  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1204 23:13:40.061619  389201 cri.go:89] found id: "697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f"
	I1204 23:13:40.061656  389201 cri.go:89] found id: ""
	I1204 23:13:40.061667  389201 logs.go:282] 1 containers: [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f]
	I1204 23:13:40.061726  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.065276  389201 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1204 23:13:40.065347  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1204 23:13:40.099762  389201 cri.go:89] found id: "249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1"
	I1204 23:13:40.099784  389201 cri.go:89] found id: ""
	I1204 23:13:40.099791  389201 logs.go:282] 1 containers: [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1]
	I1204 23:13:40.099846  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.103315  389201 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1204 23:13:40.103376  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1204 23:13:40.138517  389201 cri.go:89] found id: "1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2"
	I1204 23:13:40.138548  389201 cri.go:89] found id: ""
	I1204 23:13:40.138558  389201 logs.go:282] 1 containers: [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2]
	I1204 23:13:40.138608  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.142278  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1204 23:13:40.142338  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1204 23:13:40.177139  389201 cri.go:89] found id: "f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc"
	I1204 23:13:40.177162  389201 cri.go:89] found id: ""
	I1204 23:13:40.177169  389201 logs.go:282] 1 containers: [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc]
	I1204 23:13:40.177224  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.180724  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1204 23:13:40.180787  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1204 23:13:40.215881  389201 cri.go:89] found id: "76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d"
	I1204 23:13:40.215909  389201 cri.go:89] found id: ""
	I1204 23:13:40.215921  389201 logs.go:282] 1 containers: [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d]
	I1204 23:13:40.215978  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.219605  389201 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1204 23:13:40.219672  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1204 23:13:40.254791  389201 cri.go:89] found id: "c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec"
	I1204 23:13:40.254818  389201 cri.go:89] found id: ""
	I1204 23:13:40.254830  389201 logs.go:282] 1 containers: [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec]
	I1204 23:13:40.254883  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.258537  389201 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1204 23:13:40.258600  389201 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1204 23:13:40.293449  389201 cri.go:89] found id: "f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac"
	I1204 23:13:40.293476  389201 cri.go:89] found id: ""
	I1204 23:13:40.293486  389201 logs.go:282] 1 containers: [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac]
	I1204 23:13:40.293542  389201 ssh_runner.go:195] Run: which crictl
	I1204 23:13:40.297150  389201 logs.go:123] Gathering logs for CRI-O ...
	I1204 23:13:40.297182  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1204 23:13:40.372794  389201 logs.go:123] Gathering logs for container status ...
	I1204 23:13:40.372843  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1204 23:13:40.419461  389201 logs.go:123] Gathering logs for describe nodes ...
	I1204 23:13:40.419498  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1204 23:13:40.534097  389201 logs.go:123] Gathering logs for etcd [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1] ...
	I1204 23:13:40.534131  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1"
	I1204 23:13:40.578901  389201 logs.go:123] Gathering logs for coredns [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2] ...
	I1204 23:13:40.578941  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2"
	I1204 23:13:40.616890  389201 logs.go:123] Gathering logs for kube-controller-manager [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec] ...
	I1204 23:13:40.616923  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec"
	I1204 23:13:40.676313  389201 logs.go:123] Gathering logs for kube-proxy [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d] ...
	I1204 23:13:40.676354  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d"
	I1204 23:13:40.712137  389201 logs.go:123] Gathering logs for kindnet [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac] ...
	I1204 23:13:40.712171  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac"
	I1204 23:13:40.749253  389201 logs.go:123] Gathering logs for kubelet ...
	I1204 23:13:40.749283  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1204 23:13:40.793451  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:38 addons-630093 kubelet[1643]: W1204 23:11:38.340569    1643 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.793680  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:38 addons-630093 kubelet[1643]: E1204 23:11:38.340638    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.805200  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.658654    1643 reflector.go:561] object-"kube-system"/"gcp-auth": failed to list *v1.Secret: secrets "gcp-auth" is forbidden: User "system:node:addons-630093" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.805392  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.658718    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"gcp-auth\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"gcp-auth\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.805575  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.658773    1643 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.805790  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.658814    1643 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.805984  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661330    1643 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.806212  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661384    1643 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.806412  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661600    1643 reflector.go:561] object-"local-path-storage"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.806670  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661632    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.806884  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661689    1643 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.807109  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661706    1643 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.807303  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661862    1643 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.807526  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661888    1643 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.807722  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661952    1643 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.807952  389201 logs.go:138] Found kubelet problem: Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661968    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	I1204 23:13:40.842035  389201 logs.go:123] Gathering logs for dmesg ...
	I1204 23:13:40.842083  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1204 23:13:40.868911  389201 logs.go:123] Gathering logs for kube-apiserver [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f] ...
	I1204 23:13:40.868949  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f"
	I1204 23:13:40.915327  389201 logs.go:123] Gathering logs for kube-scheduler [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc] ...
	I1204 23:13:40.915367  389201 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc"
	I1204 23:13:40.958116  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:13:40.958151  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1204 23:13:40.958253  389201 out.go:270] X Problems detected in kubelet:
	W1204 23:13:40.958268  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661706    1643 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.958278  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661862    1643 reflector.go:561] object-"yakd-dashboard"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "yakd-dashboard": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.958294  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661888    1643 reflector.go:158] "Unhandled Error" err="object-\"yakd-dashboard\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"yakd-dashboard\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	W1204 23:13:40.958308  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: W1204 23:11:57.661952    1643 reflector.go:561] object-"local-path-storage"/"local-path-config": failed to list *v1.ConfigMap: configmaps "local-path-config" is forbidden: User "system:node:addons-630093" cannot list resource "configmaps" in API group "" in the namespace "local-path-storage": no relationship found between node 'addons-630093' and this object
	W1204 23:13:40.958323  389201 out.go:270]   Dec 04 23:11:57 addons-630093 kubelet[1643]: E1204 23:11:57.661968    1643 reflector.go:158] "Unhandled Error" err="object-\"local-path-storage\"/\"local-path-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"local-path-config\" is forbidden: User \"system:node:addons-630093\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-630093' and this object" logger="UnhandledError"
	I1204 23:13:40.958329  389201 out.go:358] Setting ErrFile to fd 2...
	I1204 23:13:40.958338  389201 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:13:50.969322  389201 system_pods.go:59] 19 kube-system pods found
	I1204 23:13:50.969358  389201 system_pods.go:61] "amd-gpu-device-plugin-xfdff" [b964506a-e0bb-4f8e-a33d-b1583ba8451e] Running
	I1204 23:13:50.969363  389201 system_pods.go:61] "coredns-7c65d6cfc9-nvslc" [e12dda0f-2d10-4096-b12f-73bd871cc18e] Running
	I1204 23:13:50.969368  389201 system_pods.go:61] "csi-hostpath-attacher-0" [af4d7f93-4989-4c1d-8c89-43d0e74f1a44] Running
	I1204 23:13:50.969372  389201 system_pods.go:61] "csi-hostpath-resizer-0" [5198084f-6ce5-4b12-89f8-5d8a76057764] Running
	I1204 23:13:50.969375  389201 system_pods.go:61] "csi-hostpathplugin-97jlr" [1d17a273-85e7-4f77-9bbe-7786a88d0ebe] Running
	I1204 23:13:50.969379  389201 system_pods.go:61] "etcd-addons-630093" [7758ddc9-6dfb-4fe8-a37f-1ef8170cd720] Running
	I1204 23:13:50.969382  389201 system_pods.go:61] "kindnet-sklhp" [a2a719ef-fccf-456e-88ac-b6e5fad34e3e] Running
	I1204 23:13:50.969387  389201 system_pods.go:61] "kube-apiserver-addons-630093" [34402f18-4ebe-4e53-9495-549544e9f70c] Running
	I1204 23:13:50.969393  389201 system_pods.go:61] "kube-controller-manager-addons-630093" [e33f5809-04da-4fb0-8265-2e29e7f90e15] Running
	I1204 23:13:50.969408  389201 system_pods.go:61] "kube-ingress-dns-minikube" [4cda5680-90e6-43e2-b35f-bf0976f6fef3] Running
	I1204 23:13:50.969415  389201 system_pods.go:61] "kube-proxy-k9l4p" [bddbd74f-1a8f-4181-b2f7-decc74059f10] Running
	I1204 23:13:50.969420  389201 system_pods.go:61] "kube-scheduler-addons-630093" [1f496311-6985-4c79-a19a-4ceade68e41e] Running
	I1204 23:13:50.969429  389201 system_pods.go:61] "metrics-server-84c5f94fbc-vtkhx" [cec44a14-191c-4123-b802-68a2c04f883d] Running
	I1204 23:13:50.969434  389201 system_pods.go:61] "nvidia-device-plugin-daemonset-rj8jd" [4960e5ae-fa86-4256-ac61-055f4d0adce3] Running
	I1204 23:13:50.969441  389201 system_pods.go:61] "registry-66c9cd494c-hxfdr" [b4aeaa23-62f9-4d1d-ba93-e79530728a03] Running
	I1204 23:13:50.969444  389201 system_pods.go:61] "registry-proxy-s54q4" [63f58b93-3d5f-4e3c-856e-74c6e4079acd] Running
	I1204 23:13:50.969453  389201 system_pods.go:61] "snapshot-controller-56fcc65765-2492d" [a604be0a-c061-4a65-9d32-0b98fff12222] Running
	I1204 23:13:50.969458  389201 system_pods.go:61] "snapshot-controller-56fcc65765-xtclh" [845fd71c-634d-41e2-a101-08a0c1458418] Running
	I1204 23:13:50.969461  389201 system_pods.go:61] "storage-provisioner" [cde6de53-e600-4898-a1c3-df78f4d4e6ff] Running
	I1204 23:13:50.969470  389201 system_pods.go:74] duration metric: took 10.943410983s to wait for pod list to return data ...
	I1204 23:13:50.969480  389201 default_sa.go:34] waiting for default service account to be created ...
	I1204 23:13:50.972205  389201 default_sa.go:45] found service account: "default"
	I1204 23:13:50.972229  389201 default_sa.go:55] duration metric: took 2.740927ms for default service account to be created ...
	I1204 23:13:50.972237  389201 system_pods.go:116] waiting for k8s-apps to be running ...
	I1204 23:13:50.980831  389201 system_pods.go:86] 19 kube-system pods found
	I1204 23:13:50.980861  389201 system_pods.go:89] "amd-gpu-device-plugin-xfdff" [b964506a-e0bb-4f8e-a33d-b1583ba8451e] Running
	I1204 23:13:50.980867  389201 system_pods.go:89] "coredns-7c65d6cfc9-nvslc" [e12dda0f-2d10-4096-b12f-73bd871cc18e] Running
	I1204 23:13:50.980872  389201 system_pods.go:89] "csi-hostpath-attacher-0" [af4d7f93-4989-4c1d-8c89-43d0e74f1a44] Running
	I1204 23:13:50.980876  389201 system_pods.go:89] "csi-hostpath-resizer-0" [5198084f-6ce5-4b12-89f8-5d8a76057764] Running
	I1204 23:13:50.980880  389201 system_pods.go:89] "csi-hostpathplugin-97jlr" [1d17a273-85e7-4f77-9bbe-7786a88d0ebe] Running
	I1204 23:13:50.980883  389201 system_pods.go:89] "etcd-addons-630093" [7758ddc9-6dfb-4fe8-a37f-1ef8170cd720] Running
	I1204 23:13:50.980887  389201 system_pods.go:89] "kindnet-sklhp" [a2a719ef-fccf-456e-88ac-b6e5fad34e3e] Running
	I1204 23:13:50.980891  389201 system_pods.go:89] "kube-apiserver-addons-630093" [34402f18-4ebe-4e53-9495-549544e9f70c] Running
	I1204 23:13:50.980895  389201 system_pods.go:89] "kube-controller-manager-addons-630093" [e33f5809-04da-4fb0-8265-2e29e7f90e15] Running
	I1204 23:13:50.980899  389201 system_pods.go:89] "kube-ingress-dns-minikube" [4cda5680-90e6-43e2-b35f-bf0976f6fef3] Running
	I1204 23:13:50.980905  389201 system_pods.go:89] "kube-proxy-k9l4p" [bddbd74f-1a8f-4181-b2f7-decc74059f10] Running
	I1204 23:13:50.980910  389201 system_pods.go:89] "kube-scheduler-addons-630093" [1f496311-6985-4c79-a19a-4ceade68e41e] Running
	I1204 23:13:50.980914  389201 system_pods.go:89] "metrics-server-84c5f94fbc-vtkhx" [cec44a14-191c-4123-b802-68a2c04f883d] Running
	I1204 23:13:50.980920  389201 system_pods.go:89] "nvidia-device-plugin-daemonset-rj8jd" [4960e5ae-fa86-4256-ac61-055f4d0adce3] Running
	I1204 23:13:50.980926  389201 system_pods.go:89] "registry-66c9cd494c-hxfdr" [b4aeaa23-62f9-4d1d-ba93-e79530728a03] Running
	I1204 23:13:50.980929  389201 system_pods.go:89] "registry-proxy-s54q4" [63f58b93-3d5f-4e3c-856e-74c6e4079acd] Running
	I1204 23:13:50.980933  389201 system_pods.go:89] "snapshot-controller-56fcc65765-2492d" [a604be0a-c061-4a65-9d32-0b98fff12222] Running
	I1204 23:13:50.980939  389201 system_pods.go:89] "snapshot-controller-56fcc65765-xtclh" [845fd71c-634d-41e2-a101-08a0c1458418] Running
	I1204 23:13:50.980943  389201 system_pods.go:89] "storage-provisioner" [cde6de53-e600-4898-a1c3-df78f4d4e6ff] Running
	I1204 23:13:50.980952  389201 system_pods.go:126] duration metric: took 8.709075ms to wait for k8s-apps to be running ...
	I1204 23:13:50.980961  389201 system_svc.go:44] waiting for kubelet service to be running ....
	I1204 23:13:50.981009  389201 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:13:50.992805  389201 system_svc.go:56] duration metric: took 11.832695ms WaitForService to wait for kubelet
	I1204 23:13:50.992839  389201 kubeadm.go:582] duration metric: took 2m12.65796392s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1204 23:13:50.992860  389201 node_conditions.go:102] verifying NodePressure condition ...
	I1204 23:13:50.996391  389201 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1204 23:13:50.996430  389201 node_conditions.go:123] node cpu capacity is 8
	I1204 23:13:50.996447  389201 node_conditions.go:105] duration metric: took 3.580009ms to run NodePressure ...
	I1204 23:13:50.996463  389201 start.go:241] waiting for startup goroutines ...
	I1204 23:13:50.996483  389201 start.go:246] waiting for cluster config update ...
	I1204 23:13:50.996508  389201 start.go:255] writing updated cluster config ...
	I1204 23:13:50.996891  389201 ssh_runner.go:195] Run: rm -f paused
	I1204 23:13:51.048677  389201 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1204 23:13:51.051940  389201 out.go:177] * Done! kubectl is now configured to use "addons-630093" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 04 23:18:10 addons-630093 crio[1031]: time="2024-12-04 23:18:10.811847057Z" level=info msg="Image docker.io/nginx:latest not found" id=23ab448b-5a2e-40d6-a776-4e85cb224673 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:18:25 addons-630093 crio[1031]: time="2024-12-04 23:18:25.810669947Z" level=info msg="Checking image status: docker.io/nginx:latest" id=e3fb6689-a759-4b85-867e-92f6eebd1d71 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:18:25 addons-630093 crio[1031]: time="2024-12-04 23:18:25.811144914Z" level=info msg="Image docker.io/nginx:latest not found" id=e3fb6689-a759-4b85-867e-92f6eebd1d71 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:18:28 addons-630093 crio[1031]: time="2024-12-04 23:18:28.891897348Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=208b5305-2a77-4200-8ef4-13d23afd85ec name=/runtime.v1.ImageService/PullImage
	Dec 04 23:18:28 addons-630093 crio[1031]: time="2024-12-04 23:18:28.908369549Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Dec 04 23:18:40 addons-630093 crio[1031]: time="2024-12-04 23:18:40.811584181Z" level=info msg="Checking image status: docker.io/nginx:latest" id=148c2788-46bb-42c0-b1d7-75b89603a9fc name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:18:40 addons-630093 crio[1031]: time="2024-12-04 23:18:40.811604015Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=8c378a36-51ea-45aa-b138-de29209946b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:18:40 addons-630093 crio[1031]: time="2024-12-04 23:18:40.811866163Z" level=info msg="Image docker.io/nginx:latest not found" id=148c2788-46bb-42c0-b1d7-75b89603a9fc name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:18:40 addons-630093 crio[1031]: time="2024-12-04 23:18:40.811956334Z" level=info msg="Image docker.io/nginx:alpine not found" id=8c378a36-51ea-45aa-b138-de29209946b1 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:18:53 addons-630093 crio[1031]: time="2024-12-04 23:18:53.810764245Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=6f76e6a0-dd92-463a-8ff1-401003d7ade5 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:18:53 addons-630093 crio[1031]: time="2024-12-04 23:18:53.811028333Z" level=info msg="Image docker.io/nginx:alpine not found" id=6f76e6a0-dd92-463a-8ff1-401003d7ade5 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:18:59 addons-630093 crio[1031]: time="2024-12-04 23:18:59.530849835Z" level=info msg="Pulling image: docker.io/nginx:latest" id=4333e13d-d0b1-4e88-bf0f-ee35ef791fc3 name=/runtime.v1.ImageService/PullImage
	Dec 04 23:18:59 addons-630093 crio[1031]: time="2024-12-04 23:18:59.534910804Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Dec 04 23:18:59 addons-630093 crio[1031]: time="2024-12-04 23:18:59.646955039Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=f9939493-f27e-4d1f-a811-a02c3e8752fe name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:18:59 addons-630093 crio[1031]: time="2024-12-04 23:18:59.647299891Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=f9939493-f27e-4d1f-a811-a02c3e8752fe name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:19:08 addons-630093 crio[1031]: time="2024-12-04 23:19:08.811285885Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=3f58bf79-6103-4a71-908c-dc9be5479a2e name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:19:08 addons-630093 crio[1031]: time="2024-12-04 23:19:08.811523532Z" level=info msg="Image docker.io/nginx:alpine not found" id=3f58bf79-6103-4a71-908c-dc9be5479a2e name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:19:11 addons-630093 crio[1031]: time="2024-12-04 23:19:11.811536416Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=e3cdaa3c-c3f6-44fe-96a1-7d5026b8622e name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:19:11 addons-630093 crio[1031]: time="2024-12-04 23:19:11.811778295Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=e3cdaa3c-c3f6-44fe-96a1-7d5026b8622e name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:19:19 addons-630093 crio[1031]: time="2024-12-04 23:19:19.811558458Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=3d082061-6fb0-43a9-a133-9cb2abe70d86 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:19:19 addons-630093 crio[1031]: time="2024-12-04 23:19:19.811782348Z" level=info msg="Image docker.io/nginx:alpine not found" id=3d082061-6fb0-43a9-a133-9cb2abe70d86 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:19:30 addons-630093 crio[1031]: time="2024-12-04 23:19:30.146500801Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=fdbdcafd-ac5b-4ec6-bac9-6bba23f37fdb name=/runtime.v1.ImageService/PullImage
	Dec 04 23:19:30 addons-630093 crio[1031]: time="2024-12-04 23:19:30.150717007Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	Dec 04 23:19:43 addons-630093 crio[1031]: time="2024-12-04 23:19:43.811712576Z" level=info msg="Checking image status: docker.io/nginx:latest" id=02901228-1e80-45ed-98f8-3587e805e02e name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:19:43 addons-630093 crio[1031]: time="2024-12-04 23:19:43.811951884Z" level=info msg="Image docker.io/nginx:latest not found" id=02901228-1e80-45ed-98f8-3587e805e02e name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	a92f917845840       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                                          5 minutes ago       Running             busybox                                  0                   9101d3097d84d       busybox
	19a975e308aa0       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b                             6 minutes ago       Running             controller                               0                   f7e4db205d4a2       ingress-nginx-controller-5f85ff4588-bjrmz
	153039955b8e9       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          6 minutes ago       Running             csi-snapshotter                          0                   75bf3104e4902       csi-hostpathplugin-97jlr
	86a86137e5e1a       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          6 minutes ago       Running             csi-provisioner                          0                   75bf3104e4902       csi-hostpathplugin-97jlr
	722cda2e61fdf       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            7 minutes ago       Running             liveness-probe                           0                   75bf3104e4902       csi-hostpathplugin-97jlr
	520228ead6e81       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           7 minutes ago       Running             hostpath                                 0                   75bf3104e4902       csi-hostpathplugin-97jlr
	904410f83eb89       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                7 minutes ago       Running             node-driver-registrar                    0                   75bf3104e4902       csi-hostpathplugin-97jlr
	d43b4e626d869       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f                   7 minutes ago       Exited              patch                                    0                   1453371ecba6e       ingress-nginx-admission-patch-6klmq
	9cfd8f1d1fc9d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f                   7 minutes ago       Exited              create                                   0                   6a2e4839790d0       ingress-nginx-admission-create-g9mgr
	c0b9ea5a54fce       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             7 minutes ago       Running             local-path-provisioner                   0                   2a9f5fb1eead6       local-path-provisioner-86d989889c-zjwsn
	3c19424241254       gcr.io/cloud-spanner-emulator/emulator@sha256:11b3615343c74d3c4ef7c4668305a87e2cab287dcab89fe2570e8d4d91938927                               7 minutes ago       Running             cloud-spanner-emulator                   0                   7e0131b1c64fc       cloud-spanner-emulator-dc5db94f4-qb868
	31862be06ca2f       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   7 minutes ago       Running             csi-external-health-monitor-controller   0                   75bf3104e4902       csi-hostpathplugin-97jlr
	c3bf77a4a88bb       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   6be372042ec01       snapshot-controller-56fcc65765-xtclh
	4bde5393ab673       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a                        7 minutes ago       Running             metrics-server                           0                   483727d0ea1ad       metrics-server-84c5f94fbc-vtkhx
	ad2a02af7805b       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      7 minutes ago       Running             volume-snapshot-controller               0                   ed2dd407b0f06       snapshot-controller-56fcc65765-2492d
	34d29b45443cc       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab                             7 minutes ago       Running             minikube-ingress-dns                     0                   fe05a9e0f9e54       kube-ingress-dns-minikube
	facaa7e1e233d       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             7 minutes ago       Running             csi-attacher                             0                   5c82f2a4a9fdc       csi-hostpath-attacher-0
	86ba1534808a8       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              7 minutes ago       Running             csi-resizer                              0                   0e397ea764d0c       csi-hostpath-resizer-0
	1c628d0404971       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                                             7 minutes ago       Running             coredns                                  0                   e5a18048ffd94       coredns-7c65d6cfc9-nvslc
	7579ef8738441       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             7 minutes ago       Running             storage-provisioner                      0                   53117b6914cba       storage-provisioner
	f0e1e1197d418       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16                                           7 minutes ago       Running             kindnet-cni                              0                   8e1077c9b19f2       kindnet-sklhp
	76b8a8033f246       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                                             8 minutes ago       Running             kube-proxy                               0                   7b72d950d834d       kube-proxy-k9l4p
	f25ca8d234e67       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                                             8 minutes ago       Running             kube-scheduler                           0                   6ecfaa8cbb0a8       kube-scheduler-addons-630093
	697a8666b9beb       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                                             8 minutes ago       Running             kube-apiserver                           0                   c5cc52570c5da       kube-apiserver-addons-630093
	249b17c70ce14       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                                             8 minutes ago       Running             etcd                                     0                   5c544b67b37e6       etcd-addons-630093
	c18ad7ba7d7db       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                                             8 minutes ago       Running             kube-controller-manager                  0                   2b2d046f58c6b       kube-controller-manager-addons-630093
	
	
	==> coredns [1c628d0404971ffcf0db6582f2878074f315e2807be4a331035c9159f5ab35b2] <==
	[INFO] 10.244.0.13:36200 - 58124 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000101425s
	[INFO] 10.244.0.13:43691 - 63611 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005338233s
	[INFO] 10.244.0.13:43691 - 63271 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,rd,ra 94 0.005381209s
	[INFO] 10.244.0.13:44344 - 26272 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.005410445s
	[INFO] 10.244.0.13:44344 - 26005 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.006018948s
	[INFO] 10.244.0.13:60838 - 12332 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.005880377s
	[INFO] 10.244.0.13:60838 - 12579 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.006174676s
	[INFO] 10.244.0.13:53538 - 12345 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000091701s
	[INFO] 10.244.0.13:53538 - 12144 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000126528s
	[INFO] 10.244.0.21:59547 - 34898 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000213243s
	[INFO] 10.244.0.21:42413 - 63992 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000314574s
	[INFO] 10.244.0.21:50534 - 50228 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001818s
	[INFO] 10.244.0.21:44438 - 35236 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000136337s
	[INFO] 10.244.0.21:49334 - 10258 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000138449s
	[INFO] 10.244.0.21:53611 - 11525 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00012321s
	[INFO] 10.244.0.21:33638 - 34118 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007323199s
	[INFO] 10.244.0.21:43427 - 30051 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 78 0.007940861s
	[INFO] 10.244.0.21:43377 - 12238 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.008381865s
	[INFO] 10.244.0.21:40602 - 12057 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.009350731s
	[INFO] 10.244.0.21:47148 - 45016 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007185414s
	[INFO] 10.244.0.21:42834 - 25970 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.007493941s
	[INFO] 10.244.0.21:44226 - 13563 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001030468s
	[INFO] 10.244.0.21:36544 - 7675 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001087253s
	[INFO] 10.244.0.25:33322 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000238152s
	[INFO] 10.244.0.25:43627 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00014501s
	
	
	==> describe nodes <==
	Name:               addons-630093
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-630093
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=addons-630093
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T23_11_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-630093
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-630093"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:11:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-630093
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 23:19:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 23:19:41 +0000   Wed, 04 Dec 2024 23:11:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 23:19:41 +0000   Wed, 04 Dec 2024 23:11:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 23:19:41 +0000   Wed, 04 Dec 2024 23:11:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 23:19:41 +0000   Wed, 04 Dec 2024 23:11:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-630093
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	System Info:
	  Machine ID:                 8258e1e2133c40cebfa95f57ba32eee3
	  System UUID:                bf67fca3-467d-49b0-b09d-7f56669f6671
	  Boot ID:                    ac1c7763-4d61-4ba9-8c16-bcbc5ed122b3
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (22 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  default                     cloud-spanner-emulator-dc5db94f4-qb868                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m4s
	  default                     nginx                                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m20s
	  default                     task-pv-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-bjrmz                     100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         8m2s
	  kube-system                 coredns-7c65d6cfc9-nvslc                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m8s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
	  kube-system                 csi-hostpathplugin-97jlr                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m49s
	  kube-system                 etcd-addons-630093                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m14s
	  kube-system                 kindnet-sklhp                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m8s
	  kube-system                 kube-apiserver-addons-630093                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-controller-manager-addons-630093                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m4s
	  kube-system                 kube-proxy-k9l4p                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m8s
	  kube-system                 kube-scheduler-addons-630093                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m14s
	  kube-system                 metrics-server-84c5f94fbc-vtkhx                               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         8m4s
	  kube-system                 snapshot-controller-56fcc65765-2492d                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  kube-system                 snapshot-controller-56fcc65765-xtclh                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m4s
	  local-path-storage          helper-pod-create-pvc-6694fa78-6bb2-4438-95f7-35ce09d8863d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  local-path-storage          local-path-provisioner-86d989889c-zjwsn                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             510Mi (1%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m3s                   kube-proxy       
	  Normal   Starting                 8m19s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m19s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  8m19s (x8 over 8m19s)  kubelet          Node addons-630093 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m19s (x8 over 8m19s)  kubelet          Node addons-630093 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m19s (x7 over 8m19s)  kubelet          Node addons-630093 status is now: NodeHasSufficientPID
	  Normal   Starting                 8m14s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m14s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  8m13s                  kubelet          Node addons-630093 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m13s                  kubelet          Node addons-630093 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m13s                  kubelet          Node addons-630093 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m9s                   node-controller  Node addons-630093 event: Registered Node addons-630093 in Controller
	  Normal   NodeReady                7m49s                  kubelet          Node addons-630093 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 46 91 d1 19 2f 08 06
	[Dec 4 22:54] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 d8 34 c4 9e fd 08 06
	[  +0.000456] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 46 91 d1 19 2f 08 06
	[ +35.699001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 90 40 5e 28 e1 08 06
	[Dec 4 22:55] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 3d b0 9a 20 99 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff de 90 40 5e 28 e1 08 06
	[  +1.225322] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000021] ll header: 00000000: ff ff ff ff ff ff b2 70 f6 e4 04 7e 08 06
	[  +0.023795] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a e9 42 d7 ae 99 08 06
	[  +8.010933] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 92 a5 ca 19 c6 08 06
	[ +18.260065] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9e b7 56 b9 28 5b 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ae 92 a5 ca 19 c6 08 06
	[ +24.579912] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa ca b1 23 b4 91 08 06
	[  +0.000531] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3a e9 42 d7 ae 99 08 06
	
	
	==> etcd [249b17c70ce144d885b01fd08d03c4a75ba441e200b8fbfea6a1752fb404d6b1] <==
	{"level":"info","ts":"2024-12-04T23:11:40.217773Z","caller":"traceutil/trace.go:171","msg":"trace[1405136476] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-addons-630093; range_end:; response_count:1; response_revision:392; }","duration":"108.112329ms","start":"2024-12-04T23:11:40.109647Z","end":"2024-12-04T23:11:40.217759Z","steps":["trace[1405136476] 'agreement among raft nodes before linearized reading'  (duration: 103.402111ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:11:40.605094Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.675544ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2024-12-04T23:11:40.605257Z","caller":"traceutil/trace.go:171","msg":"trace[803689926] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:398; }","duration":"198.852168ms","start":"2024-12-04T23:11:40.406387Z","end":"2024-12-04T23:11:40.605239Z","steps":["trace[803689926] 'range keys from in-memory index tree'  (duration: 194.382666ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:11:40.708502Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.336878ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128033691115604618 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/kube-system/amd-gpu-device-plugin\" mod_revision:0 > success:<request_put:<key:\"/registry/daemonsets/kube-system/amd-gpu-device-plugin\" value_size:3622 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-12-04T23:11:40.895257Z","caller":"traceutil/trace.go:171","msg":"trace[1109807764] transaction","detail":"{read_only:false; response_revision:401; number_of_response:1; }","duration":"279.117548ms","start":"2024-12-04T23:11:40.616120Z","end":"2024-12-04T23:11:40.895238Z","steps":["trace[1109807764] 'process raft request'  (duration: 279.078288ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T23:11:40.895484Z","caller":"traceutil/trace.go:171","msg":"trace[215470366] transaction","detail":"{read_only:false; response_revision:399; number_of_response:1; }","duration":"387.51899ms","start":"2024-12-04T23:11:40.507954Z","end":"2024-12-04T23:11:40.895473Z","steps":["trace[215470366] 'process raft request'  (duration: 96.858883ms)","trace[215470366] 'compare'  (duration: 103.229726ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-04T23:11:40.895555Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-04T23:11:40.507931Z","time spent":"387.575868ms","remote":"127.0.0.1:59108","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3684,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/amd-gpu-device-plugin\" mod_revision:0 > success:<request_put:<key:\"/registry/daemonsets/kube-system/amd-gpu-device-plugin\" value_size:3622 >> failure:<>"}
	{"level":"info","ts":"2024-12-04T23:11:40.895855Z","caller":"traceutil/trace.go:171","msg":"trace[2076159084] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"288.040682ms","start":"2024-12-04T23:11:40.607803Z","end":"2024-12-04T23:11:40.895844Z","steps":["trace[2076159084] 'process raft request'  (duration: 287.297204ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T23:11:40.895959Z","caller":"traceutil/trace.go:171","msg":"trace[705242873] linearizableReadLoop","detail":"{readStateIndex:410; appliedIndex:408; }","duration":"280.349916ms","start":"2024-12-04T23:11:40.615601Z","end":"2024-12-04T23:11:40.895951Z","steps":["trace[705242873] 'read index received'  (duration: 83.684619ms)","trace[705242873] 'applied index is now lower than readState.Index'  (duration: 196.664648ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-04T23:11:40.896113Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"280.608929ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-addons-630093\" ","response":"range_response_count:1 size:7253"}
	{"level":"info","ts":"2024-12-04T23:11:40.896138Z","caller":"traceutil/trace.go:171","msg":"trace[1318972100] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-addons-630093; range_end:; response_count:1; response_revision:401; }","duration":"280.640123ms","start":"2024-12-04T23:11:40.615490Z","end":"2024-12-04T23:11:40.896130Z","steps":["trace[1318972100] 'agreement among raft nodes before linearized reading'  (duration: 280.572794ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:11:40.896264Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"280.36641ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T23:11:40.896282Z","caller":"traceutil/trace.go:171","msg":"trace[697950005] range","detail":"{range_begin:/registry/storageclasses; range_end:; response_count:0; response_revision:401; }","duration":"280.385448ms","start":"2024-12-04T23:11:40.615891Z","end":"2024-12-04T23:11:40.896276Z","steps":["trace[697950005] 'agreement among raft nodes before linearized reading'  (duration: 280.354047ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:11:41.603321Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.477454ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/default/cloud-spanner-emulator\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T23:11:41.603924Z","caller":"traceutil/trace.go:171","msg":"trace[1769666947] range","detail":"{range_begin:/registry/deployments/default/cloud-spanner-emulator; range_end:; response_count:0; response_revision:419; }","duration":"106.090798ms","start":"2024-12-04T23:11:41.497809Z","end":"2024-12-04T23:11:41.603899Z","steps":["trace[1769666947] 'agreement among raft nodes before linearized reading'  (duration: 105.439451ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:11:41.603524Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.607937ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-addons-630093\" ","response":"range_response_count:1 size:7253"}
	{"level":"info","ts":"2024-12-04T23:11:41.604378Z","caller":"traceutil/trace.go:171","msg":"trace[1429916583] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-addons-630093; range_end:; response_count:1; response_revision:419; }","duration":"101.463597ms","start":"2024-12-04T23:11:41.502900Z","end":"2024-12-04T23:11:41.604364Z","steps":["trace[1429916583] 'agreement among raft nodes before linearized reading'  (duration: 100.553991ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T23:11:42.012812Z","caller":"traceutil/trace.go:171","msg":"trace[1073586070] transaction","detail":"{read_only:false; response_revision:435; number_of_response:1; }","duration":"101.602813ms","start":"2024-12-04T23:11:41.911189Z","end":"2024-12-04T23:11:42.012792Z","steps":["trace[1073586070] 'process raft request'  (duration: 87.210063ms)","trace[1073586070] 'compare'  (duration: 13.942562ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-04T23:11:42.012996Z","caller":"traceutil/trace.go:171","msg":"trace[73910532] transaction","detail":"{read_only:false; response_revision:436; number_of_response:1; }","duration":"101.658352ms","start":"2024-12-04T23:11:41.911329Z","end":"2024-12-04T23:11:42.012987Z","steps":["trace[73910532] 'process raft request'  (duration: 101.143669ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T23:11:42.013256Z","caller":"traceutil/trace.go:171","msg":"trace[1994636355] transaction","detail":"{read_only:false; response_revision:437; number_of_response:1; }","duration":"101.69878ms","start":"2024-12-04T23:11:41.911547Z","end":"2024-12-04T23:11:42.013245Z","steps":["trace[1994636355] 'process raft request'  (duration: 100.967611ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:11:42.096651Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.399561ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T23:11:42.096715Z","caller":"traceutil/trace.go:171","msg":"trace[1209668564] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:440; }","duration":"178.473778ms","start":"2024-12-04T23:11:41.918228Z","end":"2024-12-04T23:11:42.096702Z","steps":["trace[1209668564] 'agreement among raft nodes before linearized reading'  (duration: 178.384048ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-04T23:11:42.097064Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.915985ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-ingress-dns-minikube\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-04T23:11:42.099886Z","caller":"traceutil/trace.go:171","msg":"trace[231438469] range","detail":"{range_begin:/registry/pods/kube-system/kube-ingress-dns-minikube; range_end:; response_count:0; response_revision:440; }","duration":"181.736324ms","start":"2024-12-04T23:11:41.918132Z","end":"2024-12-04T23:11:42.099868Z","steps":["trace[231438469] 'agreement among raft nodes before linearized reading'  (duration: 178.596552ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-04T23:11:44.318424Z","caller":"traceutil/trace.go:171","msg":"trace[299548537] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"105.793664ms","start":"2024-12-04T23:11:44.212613Z","end":"2024-12-04T23:11:44.318407Z","steps":["trace[299548537] 'process raft request'  (duration: 103.084576ms)"],"step_count":1}
	
	
	==> kernel <==
	 23:19:46 up  2:02,  0 users,  load average: 0.23, 0.49, 0.80
	Linux addons-630093 5.15.0-1071-gcp #79~20.04.1-Ubuntu SMP Thu Oct 17 21:59:34 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [f0e1e1197d418a53fccb71ca5e416f4c418c94bb11c8ffe71a914ba0f816aeac] <==
	I1204 23:17:37.402767       1 main.go:301] handling current node
	I1204 23:17:47.396534       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:17:47.396593       1 main.go:301] handling current node
	I1204 23:17:57.398730       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:17:57.398782       1 main.go:301] handling current node
	I1204 23:18:07.395802       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:18:07.395843       1 main.go:301] handling current node
	I1204 23:18:17.398735       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:18:17.398775       1 main.go:301] handling current node
	I1204 23:18:27.395698       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:18:27.395786       1 main.go:301] handling current node
	I1204 23:18:37.402744       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:18:37.402787       1 main.go:301] handling current node
	I1204 23:18:47.396592       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:18:47.396635       1 main.go:301] handling current node
	I1204 23:18:57.395818       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:18:57.395863       1 main.go:301] handling current node
	I1204 23:19:07.397501       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:19:07.397546       1 main.go:301] handling current node
	I1204 23:19:17.398712       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:19:17.398746       1 main.go:301] handling current node
	I1204 23:19:27.398720       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:19:27.398771       1 main.go:301] handling current node
	I1204 23:19:37.402734       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:19:37.402778       1 main.go:301] handling current node
	
	
	==> kube-apiserver [697a8666b9beb3ce1d03c942590f6bd6818dd188d6ce6114000d4cd0f86eb24f] <==
	E1204 23:11:57.667972       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.135.57:443: connect: connection refused" logger="UnhandledError"
	W1204 23:12:44.501182       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 23:12:44.501270       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1204 23:12:44.501295       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1204 23:12:44.502403       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1204 23:12:44.502426       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1204 23:13:18.020994       1 handler_proxy.go:99] no RequestInfo found in the context
	E1204 23:13:18.021061       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.81.204:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.81.204:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.81.204:443: connect: connection refused" logger="UnhandledError"
	E1204 23:13:18.021072       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1204 23:13:18.022591       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.81.204:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.81.204:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.81.204:443: connect: connection refused" logger="UnhandledError"
	I1204 23:13:18.053200       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1204 23:13:59.747428       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54842: use of closed network connection
	E1204 23:13:59.921107       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54876: use of closed network connection
	I1204 23:14:08.946781       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.96.65.33"}
	I1204 23:14:25.954565       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1204 23:14:26.167940       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.235.196"}
	I1204 23:14:28.188596       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1204 23:14:29.205715       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	
	
	==> kube-controller-manager [c18ad7ba7d7db0830f098b28bebb532246d393507131f12d889ee2f3dd1f0cec] <==
	I1204 23:14:37.558815       1 shared_informer.go:320] Caches are synced for resource quota
	I1204 23:14:37.974866       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1204 23:14:37.974913       1 shared_informer.go:320] Caches are synced for garbage collector
	I1204 23:14:38.438202       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="gadget"
	W1204 23:14:39.494688       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:14:39.494738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1204 23:14:39.957349       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="12.035µs"
	I1204 23:14:50.067311       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W1204 23:14:51.659881       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:14:51.659934       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 23:15:15.331968       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:15:15.332023       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 23:15:41.664844       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:15:41.664897       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 23:16:29.575804       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:16:29.575854       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 23:17:02.559821       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:17:02.559870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 23:17:45.806997       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:17:45.807050       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 23:18:26.298216       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:18:26.298264       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1204 23:19:04.552124       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1204 23:19:04.552173       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1204 23:19:41.406992       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-630093"
	
	
	==> kube-proxy [76b8a8033f246a695f01ca1eec1c0ba32b678a44438d9c4943a3e8ec8aff2c9d] <==
	I1204 23:11:41.999798       1 server_linux.go:66] "Using iptables proxy"
	I1204 23:11:42.522412       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1204 23:11:42.522510       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 23:11:42.915799       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1204 23:11:42.916905       1 server_linux.go:169] "Using iptables Proxier"
	I1204 23:11:42.999168       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 23:11:42.999868       1 server.go:483] "Version info" version="v1.31.2"
	I1204 23:11:42.999987       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 23:11:43.001630       1 config.go:199] "Starting service config controller"
	I1204 23:11:43.002952       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 23:11:43.002663       1 config.go:328] "Starting node config controller"
	I1204 23:11:43.003244       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 23:11:43.002141       1 config.go:105] "Starting endpoint slice config controller"
	I1204 23:11:43.003442       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 23:11:43.105483       1 shared_informer.go:320] Caches are synced for node config
	I1204 23:11:43.105660       1 shared_informer.go:320] Caches are synced for service config
	I1204 23:11:43.105772       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f25ca8d234e6719b0b4c37293e5281f4e8e468b9b3a25895393e51a21a648acc] <==
	W1204 23:11:30.518306       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1204 23:11:30.518308       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1204 23:11:30.518319       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E1204 23:11:30.518324       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:30.518387       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1204 23:11:30.518406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.464973       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1204 23:11:31.465022       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1204 23:11:31.504488       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1204 23:11:31.504541       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.546483       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1204 23:11:31.546559       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.565052       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1204 23:11:31.565112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.572602       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1204 23:11:31.572647       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.606116       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1204 23:11:31.606166       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.628789       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1204 23:11:31.628843       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.663323       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1204 23:11:31.663367       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1204 23:11:31.685908       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1204 23:11:31.685980       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1204 23:11:33.616392       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 04 23:18:53 addons-630093 kubelet[1643]: E1204 23:18:53.000707    1643 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354333000411906,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:18:53 addons-630093 kubelet[1643]: E1204 23:18:53.811304    1643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="033304b8-dc25-498d-9212-9e1e40bc9c12"
	Dec 04 23:18:59 addons-630093 kubelet[1643]: E1204 23:18:59.530352    1643 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 04 23:18:59 addons-630093 kubelet[1643]: E1204 23:18:59.530432    1643 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Dec 04 23:18:59 addons-630093 kubelet[1643]: E1204 23:18:59.530694    1643 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:helper-pod,Image:docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79,Command:[/bin/sh /script/setup],Args:[-p /opt/local-path-provisioner/pvc-6694fa78-6bb2-4438-95f7-35ce09d8863d_default_test-pvc -s 67108864 -m Filesystem],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:VOL_DIR,Value:/opt/local-path-provisioner/pvc-6694fa78-6bb2-4438-95f7-35ce09d8863d_default_test-pvc,ValueFrom:nil,},EnvVar{Name:VOL_MODE,Value:Filesystem,ValueFrom:nil,},EnvVar{Name:VOL_SIZE_BYTES,Value:67108864,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:script,ReadOnly:false,MountPath:/script,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:data,ReadOnly:false,MountPath:/opt/local-path-provisioner/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qtvmh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod helper-pod-create-pvc-6694fa78-6bb2-4438-95f7-35ce09d8863d_local-path-storage(64785593-c5b1-4a4b-839f-c12c766ae92f): ErrImagePull: reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 04 23:18:59 addons-630093 kubelet[1643]: E1204 23:18:59.531999    1643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ErrImagePull: \"reading manifest sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 in docker.io/library/busybox: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="local-path-storage/helper-pod-create-pvc-6694fa78-6bb2-4438-95f7-35ce09d8863d" podUID="64785593-c5b1-4a4b-839f-c12c766ae92f"
	Dec 04 23:18:59 addons-630093 kubelet[1643]: E1204 23:18:59.647585    1643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"helper-pod\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\\\"\"" pod="local-path-storage/helper-pod-create-pvc-6694fa78-6bb2-4438-95f7-35ce09d8863d" podUID="64785593-c5b1-4a4b-839f-c12c766ae92f"
	Dec 04 23:19:03 addons-630093 kubelet[1643]: E1204 23:19:03.002610    1643 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354343002281424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:19:03 addons-630093 kubelet[1643]: E1204 23:19:03.002673    1643 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354343002281424,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:19:08 addons-630093 kubelet[1643]: E1204 23:19:08.811767    1643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx" podUID="033304b8-dc25-498d-9212-9e1e40bc9c12"
	Dec 04 23:19:10 addons-630093 kubelet[1643]: I1204 23:19:10.810671    1643 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-dc5db94f4-qb868" secret="" err="secret \"gcp-auth\" not found"
	Dec 04 23:19:13 addons-630093 kubelet[1643]: E1204 23:19:13.005166    1643 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354353004887025,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:19:13 addons-630093 kubelet[1643]: E1204 23:19:13.005204    1643 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354353004887025,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:19:22 addons-630093 kubelet[1643]: I1204 23:19:22.811633    1643 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 04 23:19:23 addons-630093 kubelet[1643]: E1204 23:19:23.008100    1643 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354363007832299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:19:23 addons-630093 kubelet[1643]: E1204 23:19:23.008133    1643 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354363007832299,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:19:30 addons-630093 kubelet[1643]: E1204 23:19:30.145939    1643 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 04 23:19:30 addons-630093 kubelet[1643]: E1204 23:19:30.146017    1643 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 04 23:19:30 addons-630093 kubelet[1643]: E1204 23:19:30.146393    1643 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:task-pv-container,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-server,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:task-pv-storage,ReadOnly:false,MountPath:/usr/share/nginx/html,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bbll2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod task-pv-pod_default(7d7d08b6-0c55-4e1e-af14-bcf120b4fe87): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 04 23:19:30 addons-630093 kubelet[1643]: E1204 23:19:30.147619    1643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/task-pv-pod" podUID="7d7d08b6-0c55-4e1e-af14-bcf120b4fe87"
	Dec 04 23:19:33 addons-630093 kubelet[1643]: E1204 23:19:33.010209    1643 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354373009929666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:19:33 addons-630093 kubelet[1643]: E1204 23:19:33.010265    1643 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354373009929666,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:19:43 addons-630093 kubelet[1643]: E1204 23:19:43.012198    1643 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354383011899100,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:19:43 addons-630093 kubelet[1643]: E1204 23:19:43.012231    1643 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354383011899100,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:527109,},InodesUsed:&UInt64Value{Value:212,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:19:43 addons-630093 kubelet[1643]: E1204 23:19:43.812243    1643 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"task-pv-container\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/task-pv-pod" podUID="7d7d08b6-0c55-4e1e-af14-bcf120b4fe87"
	
	
	==> storage-provisioner [7579ef87384414e56ddfe0b7d9482bd87f3030a02185f51552230baf2942b017] <==
	I1204 23:11:58.350091       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1204 23:11:58.357669       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1204 23:11:58.357713       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1204 23:11:58.365574       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1204 23:11:58.365696       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7e65eeda-0a1f-4ed0-93d5-7510680ef7a9", APIVersion:"v1", ResourceVersion:"914", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-630093_4fbeb0c1-dfd3-440b-90ad-a51f627c5476 became leader
	I1204 23:11:58.365747       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-630093_4fbeb0c1-dfd3-440b-90ad-a51f627c5476!
	I1204 23:11:58.466731       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-630093_4fbeb0c1-dfd3-440b-90ad-a51f627c5476!
	

-- /stdout --
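Note on the etcd section in the log above: the repeated "apply request took too long" warnings are measured against etcd's 100ms expected-duration budget, so each one records a request that waited on the raft apply loop. A minimal Go sketch for triaging a captured etcd log offline; the JSON field names ("level", "msg", "took") come from the lines above, while the file name and I/O handling are illustrative assumptions:

// slowapply.go: print etcd "apply request took too long" warnings and
// how far each exceeded the 100ms budget. Reads JSON log lines on stdin.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"time"
)

type entry struct {
	Level string `json:"level"`
	TS    string `json:"ts"`
	Msg   string `json:"msg"`
	Took  string `json:"took"` // e.g. "280.608929ms"
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // etcd lines can be long
	for sc.Scan() {
		var e entry
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip non-JSON lines (section headers, blanks)
		}
		if e.Level != "warn" || e.Msg != "apply request took too long" {
			continue
		}
		if took, err := time.ParseDuration(e.Took); err == nil {
			fmt.Printf("%s over budget by %v\n", e.TS, took-100*time.Millisecond)
		}
	}
}

Fed the etcd section above (go run slowapply.go < etcd.log), this lists only the over-budget requests, such as the ~280ms read of /registry/pods/kube-system/kube-controller-manager-addons-630093.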
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-630093 -n addons-630093
helpers_test.go:261: (dbg) Run:  kubectl --context addons-630093 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: nginx task-pv-pod test-local-path ingress-nginx-admission-create-g9mgr ingress-nginx-admission-patch-6klmq helper-pod-create-pvc-6694fa78-6bb2-4438-95f7-35ce09d8863d
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/LocalPath]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-630093 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-g9mgr ingress-nginx-admission-patch-6klmq helper-pod-create-pvc-6694fa78-6bb2-4438-95f7-35ce09d8863d
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-630093 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-g9mgr ingress-nginx-admission-patch-6klmq helper-pod-create-pvc-6694fa78-6bb2-4438-95f7-35ce09d8863d: exit status 1 (99.217729ms)

-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-630093/192.168.49.2
	Start Time:       Wed, 04 Dec 2024 23:14:26 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-49bg2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-49bg2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m21s                default-scheduler  Successfully assigned default/nginx to addons-630093
	  Warning  Failed     79s (x3 over 4m22s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     79s (x3 over 4m22s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    39s (x5 over 4m22s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     39s (x5 over 4m22s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    28s (x4 over 5m21s)  kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             task-pv-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-630093/192.168.49.2
	Start Time:       Wed, 04 Dec 2024 23:14:23 +0000
	Labels:           app=task-pv-pod
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.26
	IPs:
	  IP:  10.244.0.26
	Containers:
	  task-pv-container:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /usr/share/nginx/html from task-pv-storage (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bbll2 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  task-pv-storage:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  hpvc
	    ReadOnly:   false
	  kube-api-access-bbll2:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m24s                default-scheduler  Successfully assigned default/task-pv-pod to addons-630093
	  Warning  Failed     4m53s                kubelet            Failed to pull image "docker.io/nginx": initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    67s (x4 over 5m24s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     17s (x4 over 4m53s)  kubelet            Error: ErrImagePull
	  Warning  Failed     17s (x3 over 3m21s)  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    4s (x5 over 4m52s)   kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     4s (x5 over 4m52s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:  <none>
	    Mounts:
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jd9np (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-jd9np:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:                      <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-g9mgr" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-6klmq" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-6694fa78-6bb2-4438-95f7-35ce09d8863d" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-630093 describe pod nginx task-pv-pod test-local-path ingress-nginx-admission-create-g9mgr ingress-nginx-admission-patch-6klmq helper-pod-create-pvc-6694fa78-6bb2-4438-95f7-35ce09d8863d: exit status 1
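The Events tables above show kubelet's image-pull retry cadence: a Pulling attempt, a Failed/ErrImagePull event, then BackOff entries spaced further and further apart. A small Go sketch of that exponential back-off; the 10s initial delay, 2x factor, and 5m cap are the commonly cited kubelet defaults and are assumed here rather than read from this cluster's configuration:

// pullbackoff.go: print the retry cadence implied by an exponential
// image-pull back-off. Constants are assumed kubelet defaults, not
// values observed in this report.
package main

import (
	"fmt"
	"time"
)

func main() {
	const factor = 2
	var (
		delay    = 10 * time.Second // assumed initial back-off
		maxDelay = 5 * time.Minute  // assumed cap
		elapsed  time.Duration
	)
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("pull attempt %d at t=%v, next back-off %v\n", attempt, elapsed, delay)
		elapsed += delay
		delay *= factor
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

The x4 Pulling / x5 BackOff multipliers over the pods' ~5m lifetimes in the Events above are consistent with this shape.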
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-630093 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-630093 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (32.102013802s)
--- FAIL: TestAddons/parallel/LocalPath (334.48s)
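All of the non-running pods in this failure (nginx, task-pv-pod, test-local-path and its helper pod) are blocked on the same docker.io toomanyrequests pull rate limit rather than on addon logic. A hedged client-go sketch that surfaces pods stuck in pull-related wait states, similar in spirit to the --field-selector=status.phase!=Running query the harness runs above; the kubeconfig path and error handling are illustrative:

// pullstuck.go: list pods whose containers are waiting on image pulls
// (ErrImagePull / ImagePullBackOff) across all namespaces.
package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			if w := st.State.Waiting; w != nil &&
				(w.Reason == "ErrImagePull" || w.Reason == "ImagePullBackOff") {
				fmt.Printf("%s/%s: %s: %s\n", p.Namespace, p.Name, w.Reason, w.Message)
			}
		}
	}
}

Against this cluster it would have reported default/nginx, default/task-pv-pod and the local-path helper pod, each carrying the rate-limit message shown in the describes above.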

TestFunctional/parallel/PersistentVolumeClaim (189.01s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [e0001d45-4121-49ef-b7cd-7063333fcc8b] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004337767s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-217112 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-217112 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-217112 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-217112 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6301f750-dc04-406b-ab42-9c3fd9a1112e] Pending
helpers_test.go:344: "sp-pod" [6301f750-dc04-406b-ab42-9c3fd9a1112e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-217112 -n functional-217112
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2024-12-04 23:28:34.364435488 +0000 UTC m=+1074.230420589
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-217112 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-217112 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-217112/192.168.49.2
Start Time:       Wed, 04 Dec 2024 23:25:34 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:  10.244.0.8
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ErrImagePull
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hblcf (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-hblcf:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                 From               Message
  ----     ------     ----                ----               -------
  Normal   Scheduled  3m                  default-scheduler  Successfully assigned default/sp-pod to functional-217112
  Warning  Failed     113s                kubelet            Failed to pull image "docker.io/nginx": initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   Pulling    100s (x2 over 3m)   kubelet            Pulling image "docker.io/nginx"
  Warning  Failed     15s (x2 over 113s)  kubelet            Error: ErrImagePull
  Warning  Failed     15s                 kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   BackOff    2s (x2 over 113s)   kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     2s (x2 over 113s)   kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-217112 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-217112 logs sp-pod -n default: exit status 1 (67.361846ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: image can't be pulled
** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-217112 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
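Since the image never arrives, every retry above just burns more of the same quota. One mitigation (a sketch, not part of the harness) is to pull once on the host and side-load the image with the same `minikube image load` subcommand the Audit table below shows the suite already using, so the kubelet can find it locally; note that a pod whose imagePullPolicy resolves to Always (the default for :latest tags) would still try the registry:

    # Pull on the host, then copy the image into the node's CRI-O image store.
    docker pull docker.io/nginx
    minikube -p functional-217112 image load docker.io/nginx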
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-217112
helpers_test.go:235: (dbg) docker inspect functional-217112:
-- stdout --
	[
	    {
	        "Id": "e66042d8eed450e70417ef1ee5d1520d476f2f34f2f974a57812cb38291afd4d",
	        "Created": "2024-12-04T23:23:35.251060553Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 417652,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-04T23:23:35.361950648Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a0bf2062289d31d12b734a031220306d830691a529a6eae8b4c8f4049e20571",
	        "ResolvConfPath": "/var/lib/docker/containers/e66042d8eed450e70417ef1ee5d1520d476f2f34f2f974a57812cb38291afd4d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e66042d8eed450e70417ef1ee5d1520d476f2f34f2f974a57812cb38291afd4d/hostname",
	        "HostsPath": "/var/lib/docker/containers/e66042d8eed450e70417ef1ee5d1520d476f2f34f2f974a57812cb38291afd4d/hosts",
	        "LogPath": "/var/lib/docker/containers/e66042d8eed450e70417ef1ee5d1520d476f2f34f2f974a57812cb38291afd4d/e66042d8eed450e70417ef1ee5d1520d476f2f34f2f974a57812cb38291afd4d-json.log",
	        "Name": "/functional-217112",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-217112:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-217112",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/53517a903faf4426a108e32aacf93a7831214da42accee7845942d37143d3e38-init/diff:/var/lib/docker/overlay2/e1057f3484b1ab78c06169089ecae0d5a5ffb4d6954d3cd93f0938b7adf18020/diff",
	                "MergedDir": "/var/lib/docker/overlay2/53517a903faf4426a108e32aacf93a7831214da42accee7845942d37143d3e38/merged",
	                "UpperDir": "/var/lib/docker/overlay2/53517a903faf4426a108e32aacf93a7831214da42accee7845942d37143d3e38/diff",
	                "WorkDir": "/var/lib/docker/overlay2/53517a903faf4426a108e32aacf93a7831214da42accee7845942d37143d3e38/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-217112",
	                "Source": "/var/lib/docker/volumes/functional-217112/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-217112",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-217112",
	                "name.minikube.sigs.k8s.io": "functional-217112",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "caf95b5fe10d8115471aa1948f34dfdf0fc7cfca2d1234dd7d465142b2a850ce",
	            "SandboxKey": "/var/run/docker/netns/caf95b5fe10d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33154"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33153"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-217112": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e6fb0d9aa5dde7a6d493a7cdef55ddb4085e0abeaf7ed2eb640ed29f590a10b5",
	                    "EndpointID": "17a326823eedbd53b0fc9c72a3d26fde7dc11d0c17915848a9b3190c80c38268",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-217112",
	                        "e66042d8eed4"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
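The inspect dump above is captured whole for the archive; when reading it interactively, `docker inspect -f` with a Go template pulls out individual fields. A sketch (not part of the harness):

    # Container state and init PID in one line.
    docker inspect -f '{{ .State.Status }} pid={{ .State.Pid }}' functional-217112
    # Node IP on the per-profile network; the map key is the network name.
    docker inspect -f '{{ (index .NetworkSettings.Networks "functional-217112").IPAddress }}' functional-217112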
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-217112 -n functional-217112
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-217112 logs -n 25: (1.444673205s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                    Args                                    |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image          | functional-217112 image ls                                                 | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	| image          | functional-217112 image load --daemon                                      | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | kicbase/echo-server:functional-217112                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-217112 image ls                                                 | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	| image          | functional-217112 image save kicbase/echo-server:functional-217112         | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-217112 image rm                                                 | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | kicbase/echo-server:functional-217112                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-217112 image ls                                                 | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	| image          | functional-217112 image load                                               | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh            | functional-217112 ssh sudo cat                                             | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | /etc/ssl/certs/387894.pem                                                  |                   |         |         |                     |                     |
	| ssh            | functional-217112 ssh sudo cat                                             | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | /usr/share/ca-certificates/387894.pem                                      |                   |         |         |                     |                     |
	| ssh            | functional-217112 ssh sudo cat                                             | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | /etc/ssl/certs/51391683.0                                                  |                   |         |         |                     |                     |
	| ssh            | functional-217112 ssh sudo cat                                             | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | /etc/ssl/certs/3878942.pem                                                 |                   |         |         |                     |                     |
	| ssh            | functional-217112 ssh sudo cat                                             | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | /usr/share/ca-certificates/3878942.pem                                     |                   |         |         |                     |                     |
	| ssh            | functional-217112 ssh sudo cat                                             | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                                  |                   |         |         |                     |                     |
	| ssh            | functional-217112 ssh sudo cat                                             | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | /etc/test/nested/copy/387894/hosts                                         |                   |         |         |                     |                     |
	| service        | functional-217112 service                                                  | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | hello-node-connect --url                                                   |                   |         |         |                     |                     |
	| image          | functional-217112                                                          | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | image ls --format short                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-217112                                                          | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | image ls --format yaml                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-217112                                                          | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | image ls --format json                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-217112                                                          | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | image ls --format table                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh            | functional-217112 ssh pgrep                                                | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC |                     |
	|                | buildkitd                                                                  |                   |         |         |                     |                     |
	| image          | functional-217112 image build -t                                           | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | localhost/my-image:functional-217112                                       |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                           |                   |         |         |                     |                     |
	| image          | functional-217112 image ls                                                 | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	| update-context | functional-217112                                                          | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-217112                                                          | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-217112                                                          | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 23:25:31
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 23:25:31.172229  428722 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:25:31.172330  428722 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:25:31.172335  428722 out.go:358] Setting ErrFile to fd 2...
	I1204 23:25:31.172346  428722 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:25:31.172524  428722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-381016/.minikube/bin
	I1204 23:25:31.173153  428722 out.go:352] Setting JSON to false
	I1204 23:25:31.174455  428722 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7680,"bootTime":1733347051,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 23:25:31.174522  428722 start.go:139] virtualization: kvm guest
	I1204 23:25:31.176568  428722 out.go:177] * [functional-217112] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 23:25:31.178786  428722 notify.go:220] Checking for updates...
	I1204 23:25:31.178798  428722 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 23:25:31.180671  428722 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 23:25:31.182599  428722 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-381016/kubeconfig
	I1204 23:25:31.184088  428722 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-381016/.minikube
	I1204 23:25:31.185573  428722 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 23:25:31.187161  428722 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 23:25:31.188951  428722 config.go:182] Loaded profile config "functional-217112": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:25:31.189454  428722 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 23:25:31.212463  428722 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1204 23:25:31.212628  428722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:25:31.266282  428722 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-12-04 23:25:31.255749052 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1204 23:25:31.266405  428722 docker.go:318] overlay module found
	I1204 23:25:31.268198  428722 out.go:177] * Using the docker driver based on existing profile
	I1204 23:25:31.269602  428722 start.go:297] selected driver: docker
	I1204 23:25:31.269625  428722 start.go:901] validating driver "docker" against &{Name:functional-217112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-217112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:25:31.269760  428722 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 23:25:31.269875  428722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:25:31.328533  428722 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-12-04 23:25:31.318399263 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1204 23:25:31.329517  428722 cni.go:84] Creating CNI manager for ""
	I1204 23:25:31.329597  428722 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1204 23:25:31.329714  428722 start.go:340] cluster config:
	{Name:functional-217112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-217112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:25:31.332139  428722 out.go:177] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 04 23:25:57 functional-217112 crio[4905]: time="2024-12-04 23:25:57.506690173Z" level=info msg="Removing pod sandbox: 93a8b402371587eed998a068d9967edddf0d47df6353f0b5ac8cc355b0a3965e" id=531ff0ae-3984-4e93-b0ef-dd85187e30b4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 04 23:25:57 functional-217112 crio[4905]: time="2024-12-04 23:25:57.512244227Z" level=info msg="Removed pod sandbox: 93a8b402371587eed998a068d9967edddf0d47df6353f0b5ac8cc355b0a3965e" id=531ff0ae-3984-4e93-b0ef-dd85187e30b4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 04 23:26:10 functional-217112 crio[4905]: time="2024-12-04 23:26:10.726575951Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Dec 04 23:26:41 functional-217112 crio[4905]: time="2024-12-04 23:26:41.345479283Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=0831ecd1-97d8-4831-b71f-48daafd6de56 name=/runtime.v1.ImageService/PullImage
	Dec 04 23:26:41 functional-217112 crio[4905]: time="2024-12-04 23:26:41.362131526Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Dec 04 23:26:41 functional-217112 crio[4905]: time="2024-12-04 23:26:41.880054329Z" level=info msg="Checking image status: docker.io/nginx:latest" id=7ac5a719-d4ff-4b3d-a22b-738130403478 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:26:41 functional-217112 crio[4905]: time="2024-12-04 23:26:41.880327366Z" level=info msg="Image docker.io/nginx:latest not found" id=7ac5a719-d4ff-4b3d-a22b-738130403478 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:26:54 functional-217112 crio[4905]: time="2024-12-04 23:26:54.512377562Z" level=info msg="Checking image status: docker.io/nginx:latest" id=fa32a22c-67d5-4deb-b43f-ee9041c35def name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:26:54 functional-217112 crio[4905]: time="2024-12-04 23:26:54.512606584Z" level=info msg="Image docker.io/nginx:latest not found" id=fa32a22c-67d5-4deb-b43f-ee9041c35def name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:27:11 functional-217112 crio[4905]: time="2024-12-04 23:27:11.986424635Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=9b7bd085-dbe3-4086-b821-2446aeaf28dd name=/runtime.v1.ImageService/PullImage
	Dec 04 23:27:11 functional-217112 crio[4905]: time="2024-12-04 23:27:11.987768445Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Dec 04 23:27:12 functional-217112 crio[4905]: time="2024-12-04 23:27:12.946241618Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=09c5bde3-e775-48f6-9395-5d37bee0199d name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:27:12 functional-217112 crio[4905]: time="2024-12-04 23:27:12.946493629Z" level=info msg="Image docker.io/nginx:alpine not found" id=09c5bde3-e775-48f6-9395-5d37bee0199d name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:27:26 functional-217112 crio[4905]: time="2024-12-04 23:27:26.512987822Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=35bd7e23-6390-4b1a-90a0-eb1f110d06f0 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:27:26 functional-217112 crio[4905]: time="2024-12-04 23:27:26.513254894Z" level=info msg="Image docker.io/nginx:alpine not found" id=35bd7e23-6390-4b1a-90a0-eb1f110d06f0 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:27:42 functional-217112 crio[4905]: time="2024-12-04 23:27:42.614246721Z" level=info msg="Pulling image: docker.io/nginx:latest" id=cbc3ff6e-5a93-4231-8c86-76e82969f952 name=/runtime.v1.ImageService/PullImage
	Dec 04 23:27:42 functional-217112 crio[4905]: time="2024-12-04 23:27:42.618555852Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
	Dec 04 23:27:43 functional-217112 crio[4905]: time="2024-12-04 23:27:43.008987668Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=350c4e50-a095-4a46-869c-a7402f2debd5 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:27:43 functional-217112 crio[4905]: time="2024-12-04 23:27:43.009230587Z" level=info msg="Image docker.io/mysql:5.7 not found" id=350c4e50-a095-4a46-869c-a7402f2debd5 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:27:55 functional-217112 crio[4905]: time="2024-12-04 23:27:55.512894353Z" level=info msg="Checking image status: docker.io/mysql:5.7" id=f17a86f9-4340-4588-906b-1325e1df9ef0 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:27:55 functional-217112 crio[4905]: time="2024-12-04 23:27:55.513196902Z" level=info msg="Image docker.io/mysql:5.7 not found" id=f17a86f9-4340-4588-906b-1325e1df9ef0 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:28:19 functional-217112 crio[4905]: time="2024-12-04 23:28:19.553710027Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=b76731f7-aa11-4ffd-a13f-986a3ac6f045 name=/runtime.v1.ImageService/PullImage
	Dec 04 23:28:19 functional-217112 crio[4905]: time="2024-12-04 23:28:19.555109765Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
	Dec 04 23:28:32 functional-217112 crio[4905]: time="2024-12-04 23:28:32.512993287Z" level=info msg="Checking image status: docker.io/nginx:latest" id=1ccc7d62-db8b-4157-9990-073d1dc5a8a6 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:28:32 functional-217112 crio[4905]: time="2024-12-04 23:28:32.513287203Z" level=info msg="Image docker.io/nginx:latest not found" id=1ccc7d62-db8b-4157-9990-073d1dc5a8a6 name=/runtime.v1.ImageService/ImageStatus
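The CRI-O log shows the runtime repeatedly re-checking images it was never able to pull. To interrogate the image store directly, one can shell into the node and drive crictl by hand; a sketch, assuming crictl is available inside the kicbase node image:

    # List what CRI-O actually holds, then retry the pull to surface the raw registry error.
    minikube -p functional-217112 ssh -- sudo crictl images
    minikube -p functional-217112 ssh -- sudo crictl pull docker.io/library/nginx:latest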
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	62c873dfde2b8       82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410                                                 2 minutes ago       Running             echoserver                  0                   c9846a5cf296f       hello-node-connect-67bdd5bbb4-49tsm
	6779d83a3960a       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         2 minutes ago       Running             kubernetes-dashboard        0                   d0a3eb8ee5df2       kubernetes-dashboard-695b96c756-dxnz5
	56f5c1f7fd931       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   3 minutes ago       Running             dashboard-metrics-scraper   0                   3f416f29a75ca       dashboard-metrics-scraper-c5db448b4-drbkx
	107f928664708       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              3 minutes ago       Exited              mount-munger                0                   789035c7befed       busybox-mount
	fbb8647bcc7ae       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               3 minutes ago       Running             echoserver                  0                   23d1d0ff61450       hello-node-6b9f76b5c7-4sch9
	c646c17e71e96       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 3 minutes ago       Running             coredns                     2                   02006c023c3ff       coredns-7c65d6cfc9-q2jnj
	ed6bba6cae1cc       9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5                                                 3 minutes ago       Running             kindnet-cni                 2                   a6508fd5d31da       kindnet-98kqg
	6d6b9e7b7ec7e       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                 3 minutes ago       Running             kube-proxy                  2                   53ea08a96d403       kube-proxy-9xwqd
	bf56b586908d2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 3 minutes ago       Running             storage-provisioner         3                   abc8620e8793f       storage-provisioner
	61355d6627de6       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                 3 minutes ago       Running             kube-apiserver              0                   3329fe5dd914e       kube-apiserver-functional-217112
	424d3ae63cb29       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                 3 minutes ago       Running             kube-scheduler              2                   10be6f00b6c50       kube-scheduler-functional-217112
	7b8ef4e5121d7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 3 minutes ago       Running             etcd                        2                   fc8da3304d44c       etcd-functional-217112
	4d80a9740cde1       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                 3 minutes ago       Running             kube-controller-manager     2                   83c9bc62943f8       kube-controller-manager-functional-217112
	3d828f0588fd4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 3 minutes ago       Exited              storage-provisioner         2                   abc8620e8793f       storage-provisioner
	4de5da0b9a832       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 4 minutes ago       Exited              coredns                     1                   02006c023c3ff       coredns-7c65d6cfc9-q2jnj
	4fb493194047c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 4 minutes ago       Exited              etcd                        1                   fc8da3304d44c       etcd-functional-217112
	49679cf32ad2f       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                 4 minutes ago       Exited              kube-scheduler              1                   10be6f00b6c50       kube-scheduler-functional-217112
	83b54047e98f3       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                 4 minutes ago       Exited              kube-proxy                  1                   53ea08a96d403       kube-proxy-9xwqd
	92acb1f8b1040       9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5                                                 4 minutes ago       Exited              kindnet-cni                 1                   a6508fd5d31da       kindnet-98kqg
	fbb41635f74ce       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                 4 minutes ago       Exited              kube-controller-manager     1                   83c9bc62943f8       kube-controller-manager-functional-217112
	
	
	==> coredns [4de5da0b9a832955f898fa51bf79cfd3171ec1e166e50f9d27b4979b2c5730d9] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found]
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found]
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:45874 - 17269 "HINFO IN 2228789017388824328.7205854619883508623. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02962777s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
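The RBAC "clusterrole ... not found" errors in this earlier coredns instance most likely date from the window when the restarted apiserver had not yet re-created its bootstrap clusterroles; the replacement container below starts cleanly. A sketch of how one could confirm the permissions have settled, using kubectl's impersonation check:

    # Ask the apiserver whether coredns's service account may list namespaces.
    kubectl --context functional-217112 auth can-i list namespaces \
      --as=system:serviceaccount:kube-system:coredns
    # Confirm the bootstrap clusterrole exists again.
    kubectl --context functional-217112 get clusterrole system:coredns -o name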
	
	
	==> coredns [c646c17e71e96120b48cb0d2d693b2af0f6811ebc97025d4497d220a31888ac3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:54632 - 4020 "HINFO IN 4281344577431339135.2135958165029355891. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030517446s
	
	
	==> describe nodes <==
	Name:               functional-217112
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-217112
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=functional-217112
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T23_23_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:23:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-217112
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 23:28:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 23:26:01 +0000   Wed, 04 Dec 2024 23:23:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 23:26:01 +0000   Wed, 04 Dec 2024 23:23:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 23:26:01 +0000   Wed, 04 Dec 2024 23:23:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 23:26:01 +0000   Wed, 04 Dec 2024 23:24:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-217112
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ae6ec90eaad4a1dad26aed9d1c00186
	  System UUID:                c6c35e83-244c-4810-aafc-b6e500875507
	  Boot ID:                    ac1c7763-4d61-4ba9-8c16-bcbc5ed122b3
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6b9f76b5c7-4sch9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     hello-node-connect-67bdd5bbb4-49tsm          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  default                     mysql-6cdb49bbb-dhqbq                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     2m48s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  kube-system                 coredns-7c65d6cfc9-q2jnj                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m42s
	  kube-system                 etcd-functional-217112                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m47s
	  kube-system                 kindnet-98kqg                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m42s
	  kube-system                 kube-apiserver-functional-217112             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m34s
	  kube-system                 kube-controller-manager-functional-217112    200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-proxy-9xwqd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-scheduler-functional-217112             100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-drbkx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-dxnz5        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m41s                  kube-proxy       
	  Normal   Starting                 3m33s                  kube-proxy       
	  Normal   Starting                 4m12s                  kube-proxy       
	  Warning  CgroupV1                 4m53s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  4m53s (x8 over 4m53s)  kubelet          Node functional-217112 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m53s (x8 over 4m53s)  kubelet          Node functional-217112 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m53s (x7 over 4m53s)  kubelet          Node functional-217112 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    4m47s                  kubelet          Node functional-217112 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 4m47s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  4m47s                  kubelet          Node functional-217112 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     4m47s                  kubelet          Node functional-217112 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m47s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           4m43s                  node-controller  Node functional-217112 event: Registered Node functional-217112 in Controller
	  Normal   NodeReady                4m29s                  kubelet          Node functional-217112 status is now: NodeReady
	  Normal   RegisteredNode           4m9s                   node-controller  Node functional-217112 event: Registered Node functional-217112 in Controller
	  Normal   NodeHasSufficientMemory  3m38s (x8 over 3m38s)  kubelet          Node functional-217112 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 3m38s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 3m38s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    3m38s (x8 over 3m38s)  kubelet          Node functional-217112 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m38s (x7 over 3m38s)  kubelet          Node functional-217112 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m31s                  node-controller  Node functional-217112 event: Registered Node functional-217112 in Controller
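
	A quick sanity check of the percentages above: they are computed against the node's allocatable resources (8 CPUs, 32859304Ki ≈ 31.3Gi of memory), so 1450m of CPU requests is 1450/8000 ≈ 18% and 732Mi of memory requests is about 732Mi/31.3Gi ≈ 2%, matching the Allocated resources table.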
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 46 91 d1 19 2f 08 06
	[Dec 4 22:54] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 d8 34 c4 9e fd 08 06
	[  +0.000456] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 46 91 d1 19 2f 08 06
	[ +35.699001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 90 40 5e 28 e1 08 06
	[Dec 4 22:55] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 3d b0 9a 20 99 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff de 90 40 5e 28 e1 08 06
	[  +1.225322] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000021] ll header: 00000000: ff ff ff ff ff ff b2 70 f6 e4 04 7e 08 06
	[  +0.023795] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a e9 42 d7 ae 99 08 06
	[  +8.010933] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 92 a5 ca 19 c6 08 06
	[ +18.260065] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9e b7 56 b9 28 5b 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ae 92 a5 ca 19 c6 08 06
	[ +24.579912] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa ca b1 23 b4 91 08 06
	[  +0.000531] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3a e9 42 d7 ae 99 08 06
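
	The "martian source" entries above are the kernel logging packets that arrive on eth0 with a source address it does not expect; in bridged Docker/minikube networks this is common during pod churn and generally harmless. If the log noise is unwanted, martian logging can be turned off with sysctl on the host (a sketch; eth0 is the interface named in the messages):

	  sysctl -w net.ipv4.conf.all.log_martians=0
	  sysctl -w net.ipv4.conf.eth0.log_martians=0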
	
	
	==> etcd [4fb493194047c45aab296cb9ba1167a86b39fd7f9c063c655af863baf0b0be6b] <==
	{"level":"info","ts":"2024-12-04T23:24:21.402497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-12-04T23:24:21.402515Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-12-04T23:24:21.402523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-12-04T23:24:21.402539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-12-04T23:24:21.402546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-12-04T23:24:21.403691Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-217112 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-04T23:24:21.403711Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T23:24:21.403707Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T23:24:21.403955Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-04T23:24:21.403996Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-04T23:24:21.404698Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-04T23:24:21.404942Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-04T23:24:21.405650Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-04T23:24:21.406116Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-12-04T23:24:48.272652Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-12-04T23:24:48.272714Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-217112","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-12-04T23:24:48.272808Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-04T23:24:48.272911Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/12/04 23:24:48 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-12-04T23:24:48.292726Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-04T23:24:48.292815Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-12-04T23:24:48.292921Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-12-04T23:24:48.296096Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-12-04T23:24:48.296215Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-12-04T23:24:48.296227Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-217112","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [7b8ef4e5121d768ea4bfa13be8fca07d3e3f570404af4b5ba790e19e5690a3b2] <==
	{"level":"info","ts":"2024-12-04T23:24:58.336038Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-12-04T23:24:58.336131Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-12-04T23:24:58.336262Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T23:24:58.336292Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T23:24:58.394890Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-04T23:24:58.394954Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-12-04T23:24:58.395060Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-12-04T23:24:58.395321Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-04T23:24:58.395418Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-04T23:24:59.925390Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-12-04T23:24:59.925436Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-12-04T23:24:59.925470Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-12-04T23:24:59.925482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-12-04T23:24:59.925488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-12-04T23:24:59.925496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-12-04T23:24:59.925514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-12-04T23:24:59.926912Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T23:24:59.926919Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-217112 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-04T23:24:59.926931Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T23:24:59.927212Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-04T23:24:59.927233Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-04T23:24:59.927900Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-04T23:24:59.928096Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-04T23:24:59.928652Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-04T23:24:59.928766Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> kernel <==
	 23:28:35 up  2:11,  0 users,  load average: 0.28, 0.55, 0.70
	Linux functional-217112 5.15.0-1071-gcp #79~20.04.1-Ubuntu SMP Thu Oct 17 21:59:34 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [92acb1f8b104047dfc858069772970d1f0ead06d0d08f5afd2c81d6940fda185] <==
	I1204 23:24:20.095079       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1204 23:24:20.095358       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1204 23:24:20.095554       1 main.go:148] setting mtu 1500 for CNI 
	I1204 23:24:20.095569       1 main.go:178] kindnetd IP family: "ipv4"
	I1204 23:24:20.095581       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I1204 23:24:20.495222       1 controller.go:361] Starting controller kube-network-policies
	I1204 23:24:20.495341       1 controller.go:365] Waiting for informer caches to sync
	I1204 23:24:20.495371       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I1204 23:24:22.796003       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I1204 23:24:22.796143       1 metrics.go:61] Registering metrics
	I1204 23:24:22.796235       1 controller.go:401] Syncing nftables rules
	I1204 23:24:30.495855       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:24:30.495932       1 main.go:301] handling current node
	I1204 23:24:40.495866       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:24:40.495925       1 main.go:301] handling current node
	
	
	==> kindnet [ed6bba6cae1cc6343508b7dce043f4e05e65de874abf02a0a8005cd507858b25] <==
	I1204 23:26:32.620362       1 main.go:301] handling current node
	I1204 23:26:42.620124       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:26:42.620178       1 main.go:301] handling current node
	I1204 23:26:52.626695       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:26:52.626728       1 main.go:301] handling current node
	I1204 23:27:02.619813       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:27:02.619863       1 main.go:301] handling current node
	I1204 23:27:12.626715       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:27:12.626770       1 main.go:301] handling current node
	I1204 23:27:22.628865       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:27:22.628913       1 main.go:301] handling current node
	I1204 23:27:32.625515       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:27:32.625565       1 main.go:301] handling current node
	I1204 23:27:42.626720       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:27:42.626761       1 main.go:301] handling current node
	I1204 23:27:52.626728       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:27:52.626769       1 main.go:301] handling current node
	I1204 23:28:02.619788       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:28:02.619831       1 main.go:301] handling current node
	I1204 23:28:12.626743       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:28:12.626786       1 main.go:301] handling current node
	I1204 23:28:22.628851       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:28:22.628887       1 main.go:301] handling current node
	I1204 23:28:32.619841       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:28:32.619876       1 main.go:301] handling current node
	
	
	==> kube-apiserver [61355d6627de6fec9e702f2bb2c397cae966745c3514426c12bbf387432b6192] <==
	I1204 23:25:01.031589       1 aggregator.go:171] initial CRD sync complete...
	I1204 23:25:01.031604       1 autoregister_controller.go:144] Starting autoregister controller
	I1204 23:25:01.031610       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1204 23:25:01.031617       1 cache.go:39] Caches are synced for autoregister controller
	I1204 23:25:01.031471       1 shared_informer.go:320] Caches are synced for configmaps
	I1204 23:25:01.036407       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1204 23:25:01.055435       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1204 23:25:01.100196       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1204 23:25:01.939017       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1204 23:25:02.947100       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1204 23:25:03.059700       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1204 23:25:03.072977       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1204 23:25:03.151821       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1204 23:25:03.161537       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1204 23:25:04.503879       1 controller.go:615] quota admission added evaluator for: endpoints
	I1204 23:25:04.604257       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1204 23:25:24.007608       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.216.106"}
	I1204 23:25:27.785066       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1204 23:25:27.887496       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.228.160"}
	I1204 23:25:32.346799       1 controller.go:615] quota admission added evaluator for: namespaces
	I1204 23:25:32.529886       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.102.16"}
	I1204 23:25:32.543730       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.150.44"}
	I1204 23:25:40.632817       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.245.58"}
	I1204 23:25:40.843162       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.32.176"}
	I1204 23:25:47.825249       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.102.174.146"}
	
	
	==> kube-controller-manager [4d80a9740cde188ac8cb450bc33dd189481182c72cd06bb2eb7af8153cbc99a6] <==
	E1204 23:25:32.421291       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1204 23:25:32.436231       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="10.66839ms"
	I1204 23:25:32.495130       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="72.136706ms"
	I1204 23:25:32.505195       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="9.922455ms"
	I1204 23:25:32.505273       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="42.36µs"
	I1204 23:25:32.512128       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="75.839083ms"
	I1204 23:25:32.512239       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="68.461µs"
	I1204 23:25:32.520083       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="87.008µs"
	I1204 23:25:35.744904       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="7.964377ms"
	I1204 23:25:35.745035       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="61.879µs"
	I1204 23:25:40.761989       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="7.819708ms"
	I1204 23:25:40.762064       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="45.968µs"
	I1204 23:25:40.773824       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="11.61955ms"
	I1204 23:25:40.779685       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="5.797911ms"
	I1204 23:25:40.779782       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="52.173µs"
	I1204 23:25:40.781342       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="52.049µs"
	I1204 23:25:41.760882       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="6.017991ms"
	I1204 23:25:41.760967       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="49.171µs"
	I1204 23:25:47.869794       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="12.161467ms"
	I1204 23:25:47.874331       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="4.48654ms"
	I1204 23:25:47.874435       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="66.088µs"
	I1204 23:25:47.878262       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="44.562µs"
	I1204 23:26:01.844240       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-217112"
	I1204 23:27:43.019584       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="590.176µs"
	I1204 23:27:55.521302       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="65.556µs"
	
	
	==> kube-controller-manager [fbb41635f74ce0412b5842c66e32d2b0cf89a9bfeb2f358bcc5fe76e5fbd8b4f] <==
	I1204 23:24:26.152677       1 shared_informer.go:320] Caches are synced for taint
	I1204 23:24:26.152751       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1204 23:24:26.152832       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-217112"
	I1204 23:24:26.152878       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1204 23:24:26.153929       1 shared_informer.go:320] Caches are synced for ephemeral
	I1204 23:24:26.155435       1 shared_informer.go:320] Caches are synced for GC
	I1204 23:24:26.157389       1 shared_informer.go:320] Caches are synced for resource quota
	I1204 23:24:26.161106       1 shared_informer.go:320] Caches are synced for persistent volume
	I1204 23:24:26.164640       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1204 23:24:26.172682       1 shared_informer.go:320] Caches are synced for stateful set
	I1204 23:24:26.202956       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1204 23:24:26.203003       1 shared_informer.go:320] Caches are synced for endpoint
	I1204 23:24:26.203101       1 shared_informer.go:320] Caches are synced for daemon sets
	I1204 23:24:26.203141       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1204 23:24:26.209633       1 shared_informer.go:320] Caches are synced for resource quota
	I1204 23:24:26.209639       1 shared_informer.go:320] Caches are synced for attach detach
	I1204 23:24:26.261123       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="57.900543ms"
	I1204 23:24:26.261298       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="87.089µs"
	I1204 23:24:26.566520       1 shared_informer.go:320] Caches are synced for garbage collector
	I1204 23:24:26.569720       1 shared_informer.go:320] Caches are synced for garbage collector
	I1204 23:24:26.569746       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1204 23:24:27.617063       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.120001ms"
	I1204 23:24:27.617254       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="142.493µs"
	I1204 23:24:29.248109       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-217112"
	I1204 23:24:39.484625       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-217112"
	
	
	==> kube-proxy [6d6b9e7b7ec7e051e3898d6dda632ac709de177e6cf574f854be4516d37d9474] <==
	I1204 23:25:02.022083       1 server_linux.go:66] "Using iptables proxy"
	I1204 23:25:02.186735       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1204 23:25:02.186812       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 23:25:02.297631       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1204 23:25:02.297743       1 server_linux.go:169] "Using iptables Proxier"
	I1204 23:25:02.300120       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 23:25:02.300548       1 server.go:483] "Version info" version="v1.31.2"
	I1204 23:25:02.300597       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 23:25:02.301932       1 config.go:105] "Starting endpoint slice config controller"
	I1204 23:25:02.301969       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 23:25:02.302008       1 config.go:199] "Starting service config controller"
	I1204 23:25:02.302021       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 23:25:02.302064       1 config.go:328] "Starting node config controller"
	I1204 23:25:02.302087       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 23:25:02.402815       1 shared_informer.go:320] Caches are synced for node config
	I1204 23:25:02.402857       1 shared_informer.go:320] Caches are synced for service config
	I1204 23:25:02.402884       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [83b54047e98f32b470ca53fa0d40f24b0500afcdbf83eeca1645c04c82e36546] <==
	I1204 23:24:20.101944       1 server_linux.go:66] "Using iptables proxy"
	I1204 23:24:22.697530       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1204 23:24:22.699730       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 23:24:22.822797       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1204 23:24:22.822869       1 server_linux.go:169] "Using iptables Proxier"
	I1204 23:24:22.826141       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 23:24:22.826617       1 server.go:483] "Version info" version="v1.31.2"
	I1204 23:24:22.826681       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 23:24:22.828076       1 config.go:105] "Starting endpoint slice config controller"
	I1204 23:24:22.828117       1 config.go:328] "Starting node config controller"
	I1204 23:24:22.828167       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 23:24:22.828188       1 config.go:199] "Starting service config controller"
	I1204 23:24:22.828228       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 23:24:22.828308       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 23:24:22.928449       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1204 23:24:22.928502       1 shared_informer.go:320] Caches are synced for service config
	I1204 23:24:22.928536       1 shared_informer.go:320] Caches are synced for node config
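
	Both kube-proxy instances log the same startup warning: with nodePortAddresses unset, NodePort connections are accepted on every local IP. Following the hint in the message itself, the check can be narrowed to the node's primary address by setting nodePortAddresses in the kube-proxy configuration (a sketch for a kubeadm-managed cluster like this one; the ConfigMap name and pod label are the kubeadm defaults):

	  kubectl -n kube-system edit configmap kube-proxy
	  # under the KubeProxyConfiguration data, set:
	  #   nodePortAddresses: ["primary"]
	  kubectl -n kube-system delete pod -l k8s-app=kube-proxy  # restart to pick up the change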
	
	
	==> kube-scheduler [424d3ae63cb29ac537af79b29df1e257972e667fae47bb299c75b474f6cce579] <==
	I1204 23:24:59.006175       1 serving.go:386] Generated self-signed cert in-memory
	W1204 23:25:00.939479       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1204 23:25:00.939590       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1204 23:25:00.939609       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1204 23:25:00.939620       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1204 23:25:01.008193       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1204 23:25:01.008220       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 23:25:01.011168       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1204 23:25:01.011217       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1204 23:25:01.011314       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1204 23:25:01.011379       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1204 23:25:01.111889       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [49679cf32ad2f4779b6dbad26919cb0ddbb369fa4e03b2c98c1567dcac252369] <==
	I1204 23:24:21.117921       1 serving.go:386] Generated self-signed cert in-memory
	W1204 23:24:22.559724       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1204 23:24:22.559757       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1204 23:24:22.559767       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1204 23:24:22.559774       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1204 23:24:22.705121       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1204 23:24:22.705212       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 23:24:22.707570       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1204 23:24:22.707616       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1204 23:24:22.707846       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1204 23:24:22.707971       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1204 23:24:22.807850       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1204 23:24:48.269696       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1204 23:24:48.269806       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1204 23:24:48.270006       1 run.go:72] "command failed" err="finished without leader elect"
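
	The requestheader_controller warnings above are emitted while the scheduler starts before the extension-apiserver-authentication ConfigMap is readable; both instances proceed and sync their caches shortly after, so no action is needed here. If the warning persisted, the message's own suggestion could be instantiated (a sketch; the rolebinding name is arbitrary, and a --user binding is shown because the denied identity in the log is User "system:kube-scheduler"):

	  kubectl create rolebinding scheduler-auth-reader -n kube-system \
	    --role=extension-apiserver-authentication-reader \
	    --user=system:kube-scheduler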
	
	
	==> kubelet <==
	Dec 04 23:27:17 functional-217112 kubelet[5314]: E1204 23:27:17.637037    5314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354837636830909,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:27:27 functional-217112 kubelet[5314]: E1204 23:27:27.638325    5314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354847638171887,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:27:27 functional-217112 kubelet[5314]: E1204 23:27:27.638370    5314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354847638171887,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:27:37 functional-217112 kubelet[5314]: E1204 23:27:37.639665    5314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354857639489699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:27:37 functional-217112 kubelet[5314]: E1204 23:27:37.639719    5314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354857639489699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:27:42 functional-217112 kubelet[5314]: E1204 23:27:42.613701    5314 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Dec 04 23:27:42 functional-217112 kubelet[5314]: E1204 23:27:42.613784    5314 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Dec 04 23:27:42 functional-217112 kubelet[5314]: E1204 23:27:42.614071    5314 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:mysql,Image:docker.io/mysql:5.7,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:mysql,HostPort:0,ContainerPort:3306,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:MYSQL_ROOT_PASSWORD,Value:password,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{700 -3} {<nil>} 700m DecimalSI},memory: {{734003200 0} {<nil>} 700Mi BinarySI},},Requests:ResourceList{cpu: {{600 -3} {<nil>} 600m DecimalSI},memory: {{536870912 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wgm5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mysql-6cdb49bbb-dhqbq_default(0a47658f-c2ca-401f-aad4-4a7152c97d57): ErrImagePull: reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 04 23:27:42 functional-217112 kubelet[5314]: E1204 23:27:42.615438    5314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-6cdb49bbb-dhqbq" podUID="0a47658f-c2ca-401f-aad4-4a7152c97d57"
	Dec 04 23:27:43 functional-217112 kubelet[5314]: E1204 23:27:43.009487    5314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\"\"" pod="default/mysql-6cdb49bbb-dhqbq" podUID="0a47658f-c2ca-401f-aad4-4a7152c97d57"
	Dec 04 23:27:47 functional-217112 kubelet[5314]: E1204 23:27:47.641194    5314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354867641010847,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:27:47 functional-217112 kubelet[5314]: E1204 23:27:47.641240    5314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354867641010847,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:27:57 functional-217112 kubelet[5314]: E1204 23:27:57.642725    5314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354877642495056,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:27:57 functional-217112 kubelet[5314]: E1204 23:27:57.642769    5314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354877642495056,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:28:07 functional-217112 kubelet[5314]: E1204 23:28:07.644399    5314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354887644175634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:28:07 functional-217112 kubelet[5314]: E1204 23:28:07.644445    5314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354887644175634,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:28:17 functional-217112 kubelet[5314]: E1204 23:28:17.646123    5314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354897645909822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:28:17 functional-217112 kubelet[5314]: E1204 23:28:17.646175    5314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354897645909822,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:28:19 functional-217112 kubelet[5314]: E1204 23:28:19.553136    5314 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 04 23:28:19 functional-217112 kubelet[5314]: E1204 23:28:19.553223    5314 kuberuntime_image.go:55] "Failed to pull image" err="loading manifest for target platform: reading manifest sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 04 23:28:19 functional-217112 kubelet[5314]: E1204 23:28:19.553467    5314 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hblcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(6301f750-dc04-406b-ab42-9c3fd9a1112e): ErrImagePull: loading manifest for target platform: reading manifest sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 04 23:28:19 functional-217112 kubelet[5314]: E1204 23:28:19.554856    5314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"loading manifest for target platform: reading manifest sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="6301f750-dc04-406b-ab42-9c3fd9a1112e"
	Dec 04 23:28:27 functional-217112 kubelet[5314]: E1204 23:28:27.647682    5314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354907647481158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:28:27 functional-217112 kubelet[5314]: E1204 23:28:27.647717    5314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733354907647481158,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:28:32 functional-217112 kubelet[5314]: E1204 23:28:32.513606    5314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="6301f750-dc04-406b-ab42-9c3fd9a1112e"
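
	All of the kubelet's image failures above share one root cause: anonymous pulls of docker.io/mysql:5.7 and docker.io/nginx hit Docker Hub's rate limit (toomanyrequests), which then surfaces as ErrImagePull and ImagePullBackOff on mysql-6cdb49bbb-dhqbq and sp-pod. Per the URL in the error, authenticated pulls get a higher limit; one common remedy (a sketch; regcred and the credential placeholders are illustrative) is to create a pull secret and attach it to the namespace's default service account:

	  kubectl create secret docker-registry regcred \
	    --docker-server=https://index.docker.io/v1/ \
	    --docker-username=<user> --docker-password=<access-token>
	  kubectl patch serviceaccount default \
	    -p '{"imagePullSecrets":[{"name":"regcred"}]}'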
	
	
	==> kubernetes-dashboard [6779d83a3960a00bdfcf7aa67dad0bd2dee7f96ea22d36b994855daccffce816] <==
	2024/12/04 23:25:39 Using namespace: kubernetes-dashboard
	2024/12/04 23:25:39 Using in-cluster config to connect to apiserver
	2024/12/04 23:25:39 Using secret token for csrf signing
	2024/12/04 23:25:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/12/04 23:25:40 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/12/04 23:25:40 Successful initial request to the apiserver, version: v1.31.2
	2024/12/04 23:25:40 Generating JWE encryption key
	2024/12/04 23:25:40 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/12/04 23:25:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/12/04 23:25:40 Initializing JWE encryption key from synchronized object
	2024/12/04 23:25:40 Creating in-cluster Sidecar client
	2024/12/04 23:25:40 Serving insecurely on HTTP port: 9090
	2024/12/04 23:25:40 Successful request to sidecar
	2024/12/04 23:25:39 Starting overwatch
	
	
	==> storage-provisioner [3d828f0588fd4a6f3bd830e4a85bded49f537f7fd3e1ffc7b9ab7cc470b7430a] <==
	I1204 23:24:36.480898       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1204 23:24:36.487887       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1204 23:24:36.487937       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [bf56b586908d2bb939a3ba1dae595b2ff931be4fe3fca39b13aeeb41795c47b0] <==
	I1204 23:25:01.927608       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1204 23:25:01.996481       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1204 23:25:01.996545       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1204 23:25:19.395326       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1204 23:25:19.395410       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"de9bf37a-110c-46e5-add4-6bb479e10e0d", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-217112_9a054cd3-5a0a-4ac0-abd4-9d1c31388fa3 became leader
	I1204 23:25:19.395540       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-217112_9a054cd3-5a0a-4ac0-abd4-9d1c31388fa3!
	I1204 23:25:19.496436       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-217112_9a054cd3-5a0a-4ac0-abd4-9d1c31388fa3!
	I1204 23:25:33.897467       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1204 23:25:33.897651       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"4ac7decc-af81-42c7-902e-b44c5395dedb", APIVersion:"v1", ResourceVersion:"788", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1204 23:25:33.897565       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    bf625bb0-8aca-4fe0-aeaf-9131a0cd7e96 388 0 2024-12-04 23:23:53 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-12-04 23:23:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-4ac7decc-af81-42c7-902e-b44c5395dedb &PersistentVolumeClaim{ObjectMeta:{myclaim  default  4ac7decc-af81-42c7-902e-b44c5395dedb 788 0 2024-12-04 23:25:33 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-12-04 23:25:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-12-04 23:25:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1204 23:25:33.898048       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-4ac7decc-af81-42c7-902e-b44c5395dedb" provisioned
	I1204 23:25:33.898080       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1204 23:25:33.898087       1 volume_store.go:212] Trying to save persistentvolume "pvc-4ac7decc-af81-42c7-902e-b44c5395dedb"
	I1204 23:25:33.908002       1 volume_store.go:219] persistentvolume "pvc-4ac7decc-af81-42c7-902e-b44c5395dedb" saved
	I1204 23:25:33.908111       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"4ac7decc-af81-42c7-902e-b44c5395dedb", APIVersion:"v1", ResourceVersion:"788", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-4ac7decc-af81-42c7-902e-b44c5395dedb
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-217112 -n functional-217112
helpers_test.go:261: (dbg) Run:  kubectl --context functional-217112 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-6cdb49bbb-dhqbq nginx-svc sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-217112 describe pod busybox-mount mysql-6cdb49bbb-dhqbq nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-217112 describe pod busybox-mount mysql-6cdb49bbb-dhqbq nginx-svc sp-pod:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-217112/192.168.49.2
	Start Time:       Wed, 04 Dec 2024 23:25:31 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://107f9286647084f7b2532f41552a5b90e99ac711309956b364d67e653f0f351c
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 04 Dec 2024 23:25:33 +0000
	      Finished:     Wed, 04 Dec 2024 23:25:33 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b96b9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-b96b9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3m5s  default-scheduler  Successfully assigned default/busybox-mount to functional-217112
	  Normal  Pulling    3m5s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     3m3s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.087s (1.087s including waiting). Image size: 4631262 bytes.
	  Normal  Created    3m3s  kubelet            Created container mount-munger
	  Normal  Started    3m3s  kubelet            Started container mount-munger
	
	
	Name:             mysql-6cdb49bbb-dhqbq
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-217112/192.168.49.2
	Start Time:       Wed, 04 Dec 2024 23:25:47 +0000
	Labels:           app=mysql
	                  pod-template-hash=6cdb49bbb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-6cdb49bbb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wgm5b (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wgm5b:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m48s                default-scheduler  Successfully assigned default/mysql-6cdb49bbb-dhqbq to functional-217112
	  Warning  Failed     54s                  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     54s                  kubelet            Error: ErrImagePull
	  Normal   BackOff    53s                  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     53s                  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    41s (x2 over 2m48s)  kubelet            Pulling image "docker.io/mysql:5.7"
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-217112/192.168.49.2
	Start Time:       Wed, 04 Dec 2024 23:25:40 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6k5c4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6k5c4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  2m56s                default-scheduler  Successfully assigned default/nginx-svc to functional-217112
	  Warning  Failed     85s                  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     85s                  kubelet            Error: ErrImagePull
	  Normal   BackOff    84s                  kubelet            Back-off pulling image "docker.io/nginx:alpine"
	  Warning  Failed     84s                  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    70s (x2 over 2m56s)  kubelet            Pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-217112/192.168.49.2
	Start Time:       Wed, 04 Dec 2024 23:25:34 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ErrImagePull
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hblcf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-hblcf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m2s                 default-scheduler  Successfully assigned default/sp-pod to functional-217112
	  Warning  Failed     115s                 kubelet            Failed to pull image "docker.io/nginx": initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    102s (x2 over 3m2s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     17s (x2 over 115s)   kubelet            Error: ErrImagePull
	  Warning  Failed     17s                  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    4s (x2 over 115s)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     4s (x2 over 115s)    kubelet            Error: ImagePullBackOff

-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
E1204 23:28:51.635715  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:29:19.339470  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (189.01s)
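Every pull failure above reports the same root cause: the anonymous Docker Hub pull rate limit ("toomanyrequests"). As the kubelet events suggest, authenticating the pulls raises the quota. A minimal sketch of that fix, assuming a Docker Hub account is available (the secret name "regcred" and the <user>/<token> placeholders are illustrative, not part of this test suite):

	# Hypothetical credentials; replace <user>/<token> with a real Docker Hub login.
	kubectl --context functional-217112 create secret docker-registry regcred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<token>
	
	# Then reference the secret from the pod spec so the kubelet pulls as an
	# authenticated user:
	#   spec:
	#     imagePullSecrets:
	#     - name: regcred

With such a secret in place, pulls of docker.io/mysql:5.7 and docker.io/nginx count against the authenticated account's higher limit instead of the shared anonymous quota.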

TestFunctional/parallel/MySQL (602.88s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-217112 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-dhqbq" [0a47658f-c2ca-401f-aad4-4a7152c97d57] Pending
helpers_test.go:344: "mysql-6cdb49bbb-dhqbq" [0a47658f-c2ca-401f-aad4-4a7152c97d57] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:329: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1799: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1799: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-217112 -n functional-217112
functional_test.go:1799: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2024-12-04 23:35:48.170032889 +0000 UTC m=+1508.036018001
functional_test.go:1799: (dbg) Run:  kubectl --context functional-217112 describe po mysql-6cdb49bbb-dhqbq -n default
functional_test.go:1799: (dbg) kubectl --context functional-217112 describe po mysql-6cdb49bbb-dhqbq -n default:
Name:             mysql-6cdb49bbb-dhqbq
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-217112/192.168.49.2
Start Time:       Wed, 04 Dec 2024 23:25:47 +0000
Labels:           app=mysql
pod-template-hash=6cdb49bbb
Annotations:      <none>
Status:           Pending
IP:               10.244.0.11
IPs:
IP:           10.244.0.11
Controlled By:  ReplicaSet/mysql-6cdb49bbb
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wgm5b (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-wgm5b:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-6cdb49bbb-dhqbq to functional-217112
Normal   Pulling    3m51s (x4 over 10m)   kubelet            Pulling image "docker.io/mysql:5.7"
Warning  Failed     2m39s (x4 over 8m6s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     2m39s (x4 over 8m6s)  kubelet            Error: ErrImagePull
Normal   BackOff    2m13s (x7 over 8m5s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
Warning  Failed     2m13s (x7 over 8m5s)  kubelet            Error: ImagePullBackOff
functional_test.go:1799: (dbg) Run:  kubectl --context functional-217112 logs mysql-6cdb49bbb-dhqbq -n default
functional_test.go:1799: (dbg) Non-zero exit: kubectl --context functional-217112 logs mysql-6cdb49bbb-dhqbq -n default: exit status 1 (70.473871ms)

** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-6cdb49bbb-dhqbq" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1799: kubectl --context functional-217112 logs mysql-6cdb49bbb-dhqbq -n default: exit status 1
functional_test.go:1801: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-217112
helpers_test.go:235: (dbg) docker inspect functional-217112:

-- stdout --
	[
	    {
	        "Id": "e66042d8eed450e70417ef1ee5d1520d476f2f34f2f974a57812cb38291afd4d",
	        "Created": "2024-12-04T23:23:35.251060553Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 417652,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-04T23:23:35.361950648Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1a0bf2062289d31d12b734a031220306d830691a529a6eae8b4c8f4049e20571",
	        "ResolvConfPath": "/var/lib/docker/containers/e66042d8eed450e70417ef1ee5d1520d476f2f34f2f974a57812cb38291afd4d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e66042d8eed450e70417ef1ee5d1520d476f2f34f2f974a57812cb38291afd4d/hostname",
	        "HostsPath": "/var/lib/docker/containers/e66042d8eed450e70417ef1ee5d1520d476f2f34f2f974a57812cb38291afd4d/hosts",
	        "LogPath": "/var/lib/docker/containers/e66042d8eed450e70417ef1ee5d1520d476f2f34f2f974a57812cb38291afd4d/e66042d8eed450e70417ef1ee5d1520d476f2f34f2f974a57812cb38291afd4d-json.log",
	        "Name": "/functional-217112",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-217112:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-217112",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/53517a903faf4426a108e32aacf93a7831214da42accee7845942d37143d3e38-init/diff:/var/lib/docker/overlay2/e1057f3484b1ab78c06169089ecae0d5a5ffb4d6954d3cd93f0938b7adf18020/diff",
	                "MergedDir": "/var/lib/docker/overlay2/53517a903faf4426a108e32aacf93a7831214da42accee7845942d37143d3e38/merged",
	                "UpperDir": "/var/lib/docker/overlay2/53517a903faf4426a108e32aacf93a7831214da42accee7845942d37143d3e38/diff",
	                "WorkDir": "/var/lib/docker/overlay2/53517a903faf4426a108e32aacf93a7831214da42accee7845942d37143d3e38/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-217112",
	                "Source": "/var/lib/docker/volumes/functional-217112/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-217112",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-217112",
	                "name.minikube.sigs.k8s.io": "functional-217112",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "caf95b5fe10d8115471aa1948f34dfdf0fc7cfca2d1234dd7d465142b2a850ce",
	            "SandboxKey": "/var/run/docker/netns/caf95b5fe10d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33154"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33153"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-217112": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "e6fb0d9aa5dde7a6d493a7cdef55ddb4085e0abeaf7ed2eb640ed29f590a10b5",
	                    "EndpointID": "17a326823eedbd53b0fc9c72a3d26fde7dc11d0c17915848a9b3190c80c38268",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-217112",
	                        "e66042d8eed4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-217112 -n functional-217112
helpers_test.go:244: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p functional-217112 logs -n 25: (1.482286927s)
helpers_test.go:252: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                    Args                                    |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image          | functional-217112 image ls                                                 | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	| image          | functional-217112 image load --daemon                                      | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | kicbase/echo-server:functional-217112                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-217112 image ls                                                 | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	| image          | functional-217112 image save kicbase/echo-server:functional-217112         | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-217112 image rm                                                 | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | kicbase/echo-server:functional-217112                                      |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-217112 image ls                                                 | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	| image          | functional-217112 image load                                               | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh            | functional-217112 ssh sudo cat                                             | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | /etc/ssl/certs/387894.pem                                                  |                   |         |         |                     |                     |
	| ssh            | functional-217112 ssh sudo cat                                             | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | /usr/share/ca-certificates/387894.pem                                      |                   |         |         |                     |                     |
	| ssh            | functional-217112 ssh sudo cat                                             | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | /etc/ssl/certs/51391683.0                                                  |                   |         |         |                     |                     |
	| ssh            | functional-217112 ssh sudo cat                                             | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | /etc/ssl/certs/3878942.pem                                                 |                   |         |         |                     |                     |
	| ssh            | functional-217112 ssh sudo cat                                             | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | /usr/share/ca-certificates/3878942.pem                                     |                   |         |         |                     |                     |
	| ssh            | functional-217112 ssh sudo cat                                             | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                                  |                   |         |         |                     |                     |
	| ssh            | functional-217112 ssh sudo cat                                             | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | /etc/test/nested/copy/387894/hosts                                         |                   |         |         |                     |                     |
	| service        | functional-217112 service                                                  | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | hello-node-connect --url                                                   |                   |         |         |                     |                     |
	| image          | functional-217112                                                          | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | image ls --format short                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-217112                                                          | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | image ls --format yaml                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-217112                                                          | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | image ls --format json                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| image          | functional-217112                                                          | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | image ls --format table                                                    |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                          |                   |         |         |                     |                     |
	| ssh            | functional-217112 ssh pgrep                                                | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC |                     |
	|                | buildkitd                                                                  |                   |         |         |                     |                     |
	| image          | functional-217112 image build -t                                           | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | localhost/my-image:functional-217112                                       |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                           |                   |         |         |                     |                     |
	| image          | functional-217112 image ls                                                 | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	| update-context | functional-217112                                                          | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-217112                                                          | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	| update-context | functional-217112                                                          | functional-217112 | jenkins | v1.34.0 | 04 Dec 24 23:25 UTC | 04 Dec 24 23:25 UTC |
	|                | update-context                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                     |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 23:25:31
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 23:25:31.172229  428722 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:25:31.172330  428722 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:25:31.172335  428722 out.go:358] Setting ErrFile to fd 2...
	I1204 23:25:31.172346  428722 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:25:31.172524  428722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-381016/.minikube/bin
	I1204 23:25:31.173153  428722 out.go:352] Setting JSON to false
	I1204 23:25:31.174455  428722 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7680,"bootTime":1733347051,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 23:25:31.174522  428722 start.go:139] virtualization: kvm guest
	I1204 23:25:31.176568  428722 out.go:177] * [functional-217112] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 23:25:31.178786  428722 notify.go:220] Checking for updates...
	I1204 23:25:31.178798  428722 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 23:25:31.180671  428722 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 23:25:31.182599  428722 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-381016/kubeconfig
	I1204 23:25:31.184088  428722 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-381016/.minikube
	I1204 23:25:31.185573  428722 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 23:25:31.187161  428722 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 23:25:31.188951  428722 config.go:182] Loaded profile config "functional-217112": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:25:31.189454  428722 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 23:25:31.212463  428722 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1204 23:25:31.212628  428722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:25:31.266282  428722 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-12-04 23:25:31.255749052 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1204 23:25:31.266405  428722 docker.go:318] overlay module found
	I1204 23:25:31.268198  428722 out.go:177] * Using the docker driver based on existing profile
	I1204 23:25:31.269602  428722 start.go:297] selected driver: docker
	I1204 23:25:31.269625  428722 start.go:901] validating driver "docker" against &{Name:functional-217112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-217112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:25:31.269760  428722 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 23:25:31.269875  428722 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:25:31.328533  428722 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-12-04 23:25:31.318399263 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1204 23:25:31.329517  428722 cni.go:84] Creating CNI manager for ""
	I1204 23:25:31.329597  428722 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1204 23:25:31.329714  428722 start.go:340] cluster config:
	{Name:functional-217112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-217112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:25:31.332139  428722 out.go:177] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 04 23:34:32 functional-217112 crio[4905]: time="2024-12-04 23:34:32.512689082Z" level=info msg="Image docker.io/mysql:5.7 not found" id=d046f144-835f-452f-8733-024ca160b3f7 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:34:39 functional-217112 crio[4905]: time="2024-12-04 23:34:39.151755564Z" level=info msg="Pulling image: docker.io/mysql:5.7" id=7d8069d4-6b20-4e21-9eba-a946dc487e49 name=/runtime.v1.ImageService/PullImage
	Dec 04 23:34:39 functional-217112 crio[4905]: time="2024-12-04 23:34:39.153059630Z" level=info msg="Trying to access \"docker.io/library/mysql:5.7\""
	Dec 04 23:34:39 functional-217112 crio[4905]: time="2024-12-04 23:34:39.512694430Z" level=info msg="Checking image status: docker.io/nginx:latest" id=a7cac2f3-2785-41f5-84cf-9992b1cd7a90 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:34:39 functional-217112 crio[4905]: time="2024-12-04 23:34:39.512975176Z" level=info msg="Image docker.io/nginx:latest not found" id=a7cac2f3-2785-41f5-84cf-9992b1cd7a90 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:34:50 functional-217112 crio[4905]: time="2024-12-04 23:34:50.513086459Z" level=info msg="Checking image status: docker.io/nginx:latest" id=4c810299-e91e-46d2-968a-871ecd534c0e name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:34:50 functional-217112 crio[4905]: time="2024-12-04 23:34:50.513297970Z" level=info msg="Image docker.io/nginx:latest not found" id=4c810299-e91e-46d2-968a-871ecd534c0e name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:34:52 functional-217112 crio[4905]: time="2024-12-04 23:34:52.513149919Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=1223b421-bd72-4a22-8828-b4343ae83d26 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:34:52 functional-217112 crio[4905]: time="2024-12-04 23:34:52.513412353Z" level=info msg="Image docker.io/nginx:alpine not found" id=1223b421-bd72-4a22-8828-b4343ae83d26 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:35:01 functional-217112 crio[4905]: time="2024-12-04 23:35:01.513106355Z" level=info msg="Checking image status: docker.io/nginx:latest" id=74ac54fa-9421-422d-8e71-4285a6a5f090 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:35:01 functional-217112 crio[4905]: time="2024-12-04 23:35:01.513319269Z" level=info msg="Image docker.io/nginx:latest not found" id=74ac54fa-9421-422d-8e71-4285a6a5f090 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:35:07 functional-217112 crio[4905]: time="2024-12-04 23:35:07.512802958Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=47535115-ee2a-429b-b2b2-1c2549685311 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:35:07 functional-217112 crio[4905]: time="2024-12-04 23:35:07.513064936Z" level=info msg="Image docker.io/nginx:alpine not found" id=47535115-ee2a-429b-b2b2-1c2549685311 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:35:14 functional-217112 crio[4905]: time="2024-12-04 23:35:14.512371859Z" level=info msg="Checking image status: docker.io/nginx:latest" id=8ba381ef-2bf5-41eb-9430-3767bd972c8a name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:35:14 functional-217112 crio[4905]: time="2024-12-04 23:35:14.512609629Z" level=info msg="Image docker.io/nginx:latest not found" id=8ba381ef-2bf5-41eb-9430-3767bd972c8a name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:35:18 functional-217112 crio[4905]: time="2024-12-04 23:35:18.512812521Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=f9aa89e7-6381-4662-b9bc-4c8a81762fa8 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:35:18 functional-217112 crio[4905]: time="2024-12-04 23:35:18.513100526Z" level=info msg="Image docker.io/nginx:alpine not found" id=f9aa89e7-6381-4662-b9bc-4c8a81762fa8 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:35:27 functional-217112 crio[4905]: time="2024-12-04 23:35:27.512697186Z" level=info msg="Checking image status: docker.io/nginx:latest" id=17814a12-43d1-4f91-ad0d-e65960c7f16a name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:35:27 functional-217112 crio[4905]: time="2024-12-04 23:35:27.512945942Z" level=info msg="Image docker.io/nginx:latest not found" id=17814a12-43d1-4f91-ad0d-e65960c7f16a name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:35:32 functional-217112 crio[4905]: time="2024-12-04 23:35:32.512333573Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=704815d0-0bcb-43e9-b36f-13433de83491 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:35:32 functional-217112 crio[4905]: time="2024-12-04 23:35:32.512567543Z" level=info msg="Image docker.io/nginx:alpine not found" id=704815d0-0bcb-43e9-b36f-13433de83491 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:35:39 functional-217112 crio[4905]: time="2024-12-04 23:35:39.513080009Z" level=info msg="Checking image status: docker.io/nginx:latest" id=57da4629-66d1-4814-be77-453d9529226e name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:35:39 functional-217112 crio[4905]: time="2024-12-04 23:35:39.513375161Z" level=info msg="Image docker.io/nginx:latest not found" id=57da4629-66d1-4814-be77-453d9529226e name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:35:47 functional-217112 crio[4905]: time="2024-12-04 23:35:47.512963446Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=ed33b622-5c09-4848-b659-498241683714 name=/runtime.v1.ImageService/ImageStatus
	Dec 04 23:35:47 functional-217112 crio[4905]: time="2024-12-04 23:35:47.513246818Z" level=info msg="Image docker.io/nginx:alpine not found" id=ed33b622-5c09-4848-b659-498241683714 name=/runtime.v1.ImageService/ImageStatus
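
Note: the CRI-O log above shows the kubelet repeatedly asking for docker.io/nginx:latest and docker.io/nginx:alpine while the images never land locally, which matches the ImagePullBackOff seen in the pod events earlier in this report. A minimal manual check, assuming shell access to the node via the profile name recorded here (a diagnostic sketch, not part of the test run):

  minikube ssh -p functional-217112 -- sudo crictl images
  minikube ssh -p functional-217112 -- sudo crictl pull docker.io/library/nginx:alpine

If the pull stalls here too, the cause is registry reachability or Docker Hub rate limiting rather than kubelet or pod configuration.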
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	62c873dfde2b8       82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410                                                 10 minutes ago      Running             echoserver                  0                   c9846a5cf296f       hello-node-connect-67bdd5bbb4-49tsm
	6779d83a3960a       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         10 minutes ago      Running             kubernetes-dashboard        0                   d0a3eb8ee5df2       kubernetes-dashboard-695b96c756-dxnz5
	56f5c1f7fd931       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   10 minutes ago      Running             dashboard-metrics-scraper   0                   3f416f29a75ca       dashboard-metrics-scraper-c5db448b4-drbkx
	107f928664708       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              10 minutes ago      Exited              mount-munger                0                   789035c7befed       busybox-mount
	fbb8647bcc7ae       registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969               10 minutes ago      Running             echoserver                  0                   23d1d0ff61450       hello-node-6b9f76b5c7-4sch9
	c646c17e71e96       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 10 minutes ago      Running             coredns                     2                   02006c023c3ff       coredns-7c65d6cfc9-q2jnj
	ed6bba6cae1cc       9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5                                                 10 minutes ago      Running             kindnet-cni                 2                   a6508fd5d31da       kindnet-98kqg
	6d6b9e7b7ec7e       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                 10 minutes ago      Running             kube-proxy                  2                   53ea08a96d403       kube-proxy-9xwqd
	bf56b586908d2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         3                   abc8620e8793f       storage-provisioner
	61355d6627de6       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                 10 minutes ago      Running             kube-apiserver              0                   3329fe5dd914e       kube-apiserver-functional-217112
	424d3ae63cb29       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                 10 minutes ago      Running             kube-scheduler              2                   10be6f00b6c50       kube-scheduler-functional-217112
	7b8ef4e5121d7       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 10 minutes ago      Running             etcd                        2                   fc8da3304d44c       etcd-functional-217112
	4d80a9740cde1       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                 10 minutes ago      Running             kube-controller-manager     2                   83c9bc62943f8       kube-controller-manager-functional-217112
	3d828f0588fd4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         2                   abc8620e8793f       storage-provisioner
	4de5da0b9a832       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                 11 minutes ago      Exited              coredns                     1                   02006c023c3ff       coredns-7c65d6cfc9-q2jnj
	4fb493194047c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                 11 minutes ago      Exited              etcd                        1                   fc8da3304d44c       etcd-functional-217112
	49679cf32ad2f       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                 11 minutes ago      Exited              kube-scheduler              1                   10be6f00b6c50       kube-scheduler-functional-217112
	83b54047e98f3       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                 11 minutes ago      Exited              kube-proxy                  1                   53ea08a96d403       kube-proxy-9xwqd
	92acb1f8b1040       9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5                                                 11 minutes ago      Exited              kindnet-cni                 1                   a6508fd5d31da       kindnet-98kqg
	fbb41635f74ce       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                 11 minutes ago      Exited              kube-controller-manager     1                   83c9bc62943f8       kube-controller-manager-functional-217112
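
Note: this table is the CRI runtime's own container listing; roughly the same output can be reproduced directly against CRI-O (a sketch, assuming node access):

  minikube ssh -p functional-217112 -- sudo crictl ps -a

The -a flag includes exited containers, which is why the pre-restart coredns, etcd, kube-proxy, kindnet and kube-controller-manager attempts appear above with STATE Exited.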
	
	
	==> coredns [4de5da0b9a832955f898fa51bf79cfd3171ec1e166e50f9d27b4979b2c5730d9] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found]
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found]
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found]
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:coredns" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:service-account-issuer-discovery" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:45874 - 17269 "HINFO IN 2228789017388824328.7205854619883508623. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02962777s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
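
Note: the RBAC "clusterrole ... not found" errors in this earlier coredns instance coincide with the API server restart, before its bootstrap controller had re-registered the default clusterroles; the replacement instance below starts cleanly. Had the errors persisted, one illustrative way to verify the permissions (assuming kubectl access) would be:

  kubectl get clusterrole system:coredns
  kubectl auth can-i list endpointslices.discovery.k8s.io --as=system:serviceaccount:kube-system:coredns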
	
	
	==> coredns [c646c17e71e96120b48cb0d2d693b2af0f6811ebc97025d4497d220a31888ac3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:54632 - 4020 "HINFO IN 4281344577431339135.2135958165029355891. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030517446s
	
	
	==> describe nodes <==
	Name:               functional-217112
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-217112
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=efbd8efc50652fe861e71899e50212cc75e3480d
	                    minikube.k8s.io/name=functional-217112
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_04T23_23_49_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 04 Dec 2024 23:23:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-217112
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 04 Dec 2024 23:35:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 04 Dec 2024 23:31:07 +0000   Wed, 04 Dec 2024 23:23:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 04 Dec 2024 23:31:07 +0000   Wed, 04 Dec 2024 23:23:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 04 Dec 2024 23:31:07 +0000   Wed, 04 Dec 2024 23:23:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 04 Dec 2024 23:31:07 +0000   Wed, 04 Dec 2024 23:24:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-217112
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859304Ki
	  pods:               110
	System Info:
	  Machine ID:                 5ae6ec90eaad4a1dad26aed9d1c00186
	  System UUID:                c6c35e83-244c-4810-aafc-b6e500875507
	  Boot ID:                    ac1c7763-4d61-4ba9-8c16-bcbc5ed122b3
	  Kernel Version:             5.15.0-1071-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-6b9f76b5c7-4sch9                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-67bdd5bbb4-49tsm          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-6cdb49bbb-dhqbq                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-7c65d6cfc9-q2jnj                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-217112                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-98kqg                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-217112             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-217112    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-9xwqd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-217112             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-drbkx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-dxnz5        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-217112 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-217112 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node functional-217112 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-217112 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-217112 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-217112 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           11m                node-controller  Node functional-217112 event: Registered Node functional-217112 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-217112 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-217112 event: Registered Node functional-217112 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-217112 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-217112 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-217112 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-217112 event: Registered Node functional-217112 in Controller
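
Note: the percentages in the Allocated resources table are computed against the node's allocatable capacity, so they can be sanity-checked by hand: 1450m of 8 CPUs (8000m) is 1450/8000 ≈ 18%, and 732Mi of 32859304Ki memory is 749568/32859304 ≈ 2%, matching the table.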
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 16 46 91 d1 19 2f 08 06
	[Dec 4 22:54] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 d8 34 c4 9e fd 08 06
	[  +0.000456] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 16 46 91 d1 19 2f 08 06
	[ +35.699001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff de 90 40 5e 28 e1 08 06
	[Dec 4 22:55] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 76 3d b0 9a 20 99 08 06
	[  +0.000382] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff de 90 40 5e 28 e1 08 06
	[  +1.225322] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000021] ll header: 00000000: ff ff ff ff ff ff b2 70 f6 e4 04 7e 08 06
	[  +0.023795] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3a e9 42 d7 ae 99 08 06
	[  +8.010933] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae 92 a5 ca 19 c6 08 06
	[ +18.260065] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9e b7 56 b9 28 5b 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ae 92 a5 ca 19 c6 08 06
	[ +24.579912] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff fa ca b1 23 b4 91 08 06
	[  +0.000531] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 3a e9 42 d7 ae 99 08 06
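
Note: the "martian source" lines are the kernel logging packets whose source address is unexpected on that interface; they are routine noise on bridged container networks rather than a test failure. The logging itself is controlled by a sysctl, which could be confirmed with (illustrative only):

  minikube ssh -p functional-217112 -- sysctl net.ipv4.conf.all.log_martians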
	
	
	==> etcd [4fb493194047c45aab296cb9ba1167a86b39fd7f9c063c655af863baf0b0be6b] <==
	{"level":"info","ts":"2024-12-04T23:24:21.402497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-12-04T23:24:21.402515Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-12-04T23:24:21.402523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-12-04T23:24:21.402539Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-12-04T23:24:21.402546Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-12-04T23:24:21.403691Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-217112 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-04T23:24:21.403711Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T23:24:21.403707Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T23:24:21.403955Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-04T23:24:21.403996Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-04T23:24:21.404698Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-04T23:24:21.404942Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-04T23:24:21.405650Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-04T23:24:21.406116Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-12-04T23:24:48.272652Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-12-04T23:24:48.272714Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-217112","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-12-04T23:24:48.272808Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-04T23:24:48.272911Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	2024/12/04 23:24:48 WARNING: [core] [Server #7] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-12-04T23:24:48.292726Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-04T23:24:48.292815Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-12-04T23:24:48.292921Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-12-04T23:24:48.296096Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-12-04T23:24:48.296215Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-12-04T23:24:48.296227Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-217112","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [7b8ef4e5121d768ea4bfa13be8fca07d3e3f570404af4b5ba790e19e5690a3b2] <==
	{"level":"info","ts":"2024-12-04T23:24:58.336292Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-04T23:24:58.394890Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-04T23:24:58.394954Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-12-04T23:24:58.395060Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-12-04T23:24:58.395321Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-04T23:24:58.395418Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-04T23:24:59.925390Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-12-04T23:24:59.925436Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-12-04T23:24:59.925470Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-12-04T23:24:59.925482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-12-04T23:24:59.925488Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-12-04T23:24:59.925496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-12-04T23:24:59.925514Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-12-04T23:24:59.926912Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T23:24:59.926919Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-217112 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-04T23:24:59.926931Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-04T23:24:59.927212Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-04T23:24:59.927233Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-04T23:24:59.927900Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-04T23:24:59.928096Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-04T23:24:59.928652Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-04T23:24:59.928766Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-12-04T23:34:59.946649Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1132}
	{"level":"info","ts":"2024-12-04T23:34:59.967496Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1132,"took":"20.551169ms","hash":2736952389,"current-db-size-bytes":4374528,"current-db-size":"4.4 MB","current-db-size-in-use-bytes":1765376,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-12-04T23:34:59.967562Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2736952389,"revision":1132,"compact-revision":-1}
	
	
	==> kernel <==
	 23:35:49 up  2:18,  0 users,  load average: 0.19, 0.23, 0.47
	Linux functional-217112 5.15.0-1071-gcp #79~20.04.1-Ubuntu SMP Thu Oct 17 21:59:34 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [92acb1f8b104047dfc858069772970d1f0ead06d0d08f5afd2c81d6940fda185] <==
	I1204 23:24:20.095079       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1204 23:24:20.095358       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1204 23:24:20.095554       1 main.go:148] setting mtu 1500 for CNI 
	I1204 23:24:20.095569       1 main.go:178] kindnetd IP family: "ipv4"
	I1204 23:24:20.095581       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I1204 23:24:20.495222       1 controller.go:361] Starting controller kube-network-policies
	I1204 23:24:20.495341       1 controller.go:365] Waiting for informer caches to sync
	I1204 23:24:20.495371       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I1204 23:24:22.796003       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I1204 23:24:22.796143       1 metrics.go:61] Registering metrics
	I1204 23:24:22.796235       1 controller.go:401] Syncing nftables rules
	I1204 23:24:30.495855       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:24:30.495932       1 main.go:301] handling current node
	I1204 23:24:40.495866       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:24:40.495925       1 main.go:301] handling current node
	
	
	==> kindnet [ed6bba6cae1cc6343508b7dce043f4e05e65de874abf02a0a8005cd507858b25] <==
	I1204 23:33:42.624908       1 main.go:301] handling current node
	I1204 23:33:52.626750       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:33:52.626791       1 main.go:301] handling current node
	I1204 23:34:02.620130       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:34:02.620177       1 main.go:301] handling current node
	I1204 23:34:12.620482       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:34:12.620547       1 main.go:301] handling current node
	I1204 23:34:22.628616       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:34:22.628652       1 main.go:301] handling current node
	I1204 23:34:32.622718       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:34:32.622753       1 main.go:301] handling current node
	I1204 23:34:42.626751       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:34:42.626786       1 main.go:301] handling current node
	I1204 23:34:52.620607       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:34:52.620646       1 main.go:301] handling current node
	I1204 23:35:02.619735       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:35:02.619783       1 main.go:301] handling current node
	I1204 23:35:12.624374       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:35:12.624414       1 main.go:301] handling current node
	I1204 23:35:22.626719       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:35:22.626753       1 main.go:301] handling current node
	I1204 23:35:32.627157       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:35:32.627199       1 main.go:301] handling current node
	I1204 23:35:42.626755       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1204 23:35:42.626804       1 main.go:301] handling current node
	
	
	==> kube-apiserver [61355d6627de6fec9e702f2bb2c397cae966745c3514426c12bbf387432b6192] <==
	I1204 23:25:01.031589       1 aggregator.go:171] initial CRD sync complete...
	I1204 23:25:01.031604       1 autoregister_controller.go:144] Starting autoregister controller
	I1204 23:25:01.031610       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1204 23:25:01.031617       1 cache.go:39] Caches are synced for autoregister controller
	I1204 23:25:01.031471       1 shared_informer.go:320] Caches are synced for configmaps
	I1204 23:25:01.036407       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1204 23:25:01.055435       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I1204 23:25:01.100196       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1204 23:25:01.939017       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1204 23:25:02.947100       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1204 23:25:03.059700       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1204 23:25:03.072977       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1204 23:25:03.151821       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1204 23:25:03.161537       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1204 23:25:04.503879       1 controller.go:615] quota admission added evaluator for: endpoints
	I1204 23:25:04.604257       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1204 23:25:24.007608       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.216.106"}
	I1204 23:25:27.785066       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1204 23:25:27.887496       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.111.228.160"}
	I1204 23:25:32.346799       1 controller.go:615] quota admission added evaluator for: namespaces
	I1204 23:25:32.529886       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.102.16"}
	I1204 23:25:32.543730       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.150.44"}
	I1204 23:25:40.632817       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.245.58"}
	I1204 23:25:40.843162       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.32.176"}
	I1204 23:25:47.825249       1 alloc.go:330] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.102.174.146"}
	
	
	==> kube-controller-manager [4d80a9740cde188ac8cb450bc33dd189481182c72cd06bb2eb7af8153cbc99a6] <==
	I1204 23:25:32.520083       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="87.008µs"
	I1204 23:25:35.744904       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="7.964377ms"
	I1204 23:25:35.745035       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="61.879µs"
	I1204 23:25:40.761989       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="7.819708ms"
	I1204 23:25:40.762064       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="45.968µs"
	I1204 23:25:40.773824       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="11.61955ms"
	I1204 23:25:40.779685       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="5.797911ms"
	I1204 23:25:40.779782       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="52.173µs"
	I1204 23:25:40.781342       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="52.049µs"
	I1204 23:25:41.760882       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="6.017991ms"
	I1204 23:25:41.760967       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-node-connect-67bdd5bbb4" duration="49.171µs"
	I1204 23:25:47.869794       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="12.161467ms"
	I1204 23:25:47.874331       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="4.48654ms"
	I1204 23:25:47.874435       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="66.088µs"
	I1204 23:25:47.878262       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="44.562µs"
	I1204 23:26:01.844240       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-217112"
	I1204 23:27:43.019584       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="590.176µs"
	I1204 23:27:55.521302       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="65.556µs"
	I1204 23:29:31.521716       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="81.299µs"
	I1204 23:29:42.522578       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="90.919µs"
	I1204 23:31:07.395863       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-217112"
	I1204 23:31:18.522202       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="128.492µs"
	I1204 23:31:33.524156       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="72.448µs"
	I1204 23:33:21.521951       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="97.991µs"
	I1204 23:33:35.521181       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/mysql-6cdb49bbb" duration="118.408µs"
	
	
	==> kube-controller-manager [fbb41635f74ce0412b5842c66e32d2b0cf89a9bfeb2f358bcc5fe76e5fbd8b4f] <==
	I1204 23:24:26.152677       1 shared_informer.go:320] Caches are synced for taint
	I1204 23:24:26.152751       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1204 23:24:26.152832       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-217112"
	I1204 23:24:26.152878       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1204 23:24:26.153929       1 shared_informer.go:320] Caches are synced for ephemeral
	I1204 23:24:26.155435       1 shared_informer.go:320] Caches are synced for GC
	I1204 23:24:26.157389       1 shared_informer.go:320] Caches are synced for resource quota
	I1204 23:24:26.161106       1 shared_informer.go:320] Caches are synced for persistent volume
	I1204 23:24:26.164640       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I1204 23:24:26.172682       1 shared_informer.go:320] Caches are synced for stateful set
	I1204 23:24:26.202956       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1204 23:24:26.203003       1 shared_informer.go:320] Caches are synced for endpoint
	I1204 23:24:26.203101       1 shared_informer.go:320] Caches are synced for daemon sets
	I1204 23:24:26.203141       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1204 23:24:26.209633       1 shared_informer.go:320] Caches are synced for resource quota
	I1204 23:24:26.209639       1 shared_informer.go:320] Caches are synced for attach detach
	I1204 23:24:26.261123       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="57.900543ms"
	I1204 23:24:26.261298       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="87.089µs"
	I1204 23:24:26.566520       1 shared_informer.go:320] Caches are synced for garbage collector
	I1204 23:24:26.569720       1 shared_informer.go:320] Caches are synced for garbage collector
	I1204 23:24:26.569746       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1204 23:24:27.617063       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="7.120001ms"
	I1204 23:24:27.617254       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="142.493µs"
	I1204 23:24:29.248109       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-217112"
	I1204 23:24:39.484625       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-217112"
	
	
	==> kube-proxy [6d6b9e7b7ec7e051e3898d6dda632ac709de177e6cf574f854be4516d37d9474] <==
	I1204 23:25:02.022083       1 server_linux.go:66] "Using iptables proxy"
	I1204 23:25:02.186735       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1204 23:25:02.186812       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 23:25:02.297631       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1204 23:25:02.297743       1 server_linux.go:169] "Using iptables Proxier"
	I1204 23:25:02.300120       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 23:25:02.300548       1 server.go:483] "Version info" version="v1.31.2"
	I1204 23:25:02.300597       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 23:25:02.301932       1 config.go:105] "Starting endpoint slice config controller"
	I1204 23:25:02.301969       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 23:25:02.302008       1 config.go:199] "Starting service config controller"
	I1204 23:25:02.302021       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 23:25:02.302064       1 config.go:328] "Starting node config controller"
	I1204 23:25:02.302087       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 23:25:02.402815       1 shared_informer.go:320] Caches are synced for node config
	I1204 23:25:02.402857       1 shared_informer.go:320] Caches are synced for service config
	I1204 23:25:02.402884       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [83b54047e98f32b470ca53fa0d40f24b0500afcdbf83eeca1645c04c82e36546] <==
	I1204 23:24:20.101944       1 server_linux.go:66] "Using iptables proxy"
	I1204 23:24:22.697530       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1204 23:24:22.699730       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1204 23:24:22.822797       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1204 23:24:22.822869       1 server_linux.go:169] "Using iptables Proxier"
	I1204 23:24:22.826141       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1204 23:24:22.826617       1 server.go:483] "Version info" version="v1.31.2"
	I1204 23:24:22.826681       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 23:24:22.828076       1 config.go:105] "Starting endpoint slice config controller"
	I1204 23:24:22.828117       1 config.go:328] "Starting node config controller"
	I1204 23:24:22.828167       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1204 23:24:22.828188       1 config.go:199] "Starting service config controller"
	I1204 23:24:22.828228       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1204 23:24:22.828308       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1204 23:24:22.928449       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1204 23:24:22.928502       1 shared_informer.go:320] Caches are synced for service config
	I1204 23:24:22.928536       1 shared_informer.go:320] Caches are synced for node config
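
Note: both kube-proxy starts emit the same "nodePortAddresses is unset" warning, and the remedy is named in the message itself. In a kubeadm-style cluster like this one the setting lives in the kube-proxy ConfigMap rather than on the command line, so applying it would look roughly like this (illustrative, assuming the kubeadm default layout; the test takes no such action):

  kubectl -n kube-system edit configmap kube-proxy
  # in config.conf, set: nodePortAddresses: ["primary"]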
	
	
	==> kube-scheduler [424d3ae63cb29ac537af79b29df1e257972e667fae47bb299c75b474f6cce579] <==
	I1204 23:24:59.006175       1 serving.go:386] Generated self-signed cert in-memory
	W1204 23:25:00.939479       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1204 23:25:00.939590       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1204 23:25:00.939609       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1204 23:25:00.939620       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1204 23:25:01.008193       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1204 23:25:01.008220       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 23:25:01.011168       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1204 23:25:01.011217       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1204 23:25:01.011314       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1204 23:25:01.011379       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1204 23:25:01.111889       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [49679cf32ad2f4779b6dbad26919cb0ddbb369fa4e03b2c98c1567dcac252369] <==
	I1204 23:24:21.117921       1 serving.go:386] Generated self-signed cert in-memory
	W1204 23:24:22.559724       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1204 23:24:22.559757       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1204 23:24:22.559767       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1204 23:24:22.559774       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1204 23:24:22.705121       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1204 23:24:22.705212       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1204 23:24:22.707570       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1204 23:24:22.707616       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1204 23:24:22.707846       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1204 23:24:22.707971       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1204 23:24:22.807850       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1204 23:24:48.269696       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I1204 23:24:48.269806       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E1204 23:24:48.270006       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 04 23:34:57 functional-217112 kubelet[5314]: E1204 23:34:57.613918    5314 container_manager_linux.go:513] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/e66042d8eed450e70417ef1ee5d1520d476f2f34f2f974a57812cb38291afd4d, memory: /docker/e66042d8eed450e70417ef1ee5d1520d476f2f34f2f974a57812cb38291afd4d/system.slice/kubelet.service"
	Dec 04 23:34:57 functional-217112 kubelet[5314]: E1204 23:34:57.708386    5314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733355297708212795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:34:57 functional-217112 kubelet[5314]: E1204 23:34:57.708427    5314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733355297708212795,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:35:01 functional-217112 kubelet[5314]: E1204 23:35:01.513607    5314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="6301f750-dc04-406b-ab42-9c3fd9a1112e"
	Dec 04 23:35:07 functional-217112 kubelet[5314]: E1204 23:35:07.513322    5314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="eb24b85b-9829-45bb-9fea-250b07d13e4c"
	Dec 04 23:35:07 functional-217112 kubelet[5314]: E1204 23:35:07.709890    5314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733355307709675574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:35:07 functional-217112 kubelet[5314]: E1204 23:35:07.709918    5314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733355307709675574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:35:14 functional-217112 kubelet[5314]: E1204 23:35:14.512919    5314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="6301f750-dc04-406b-ab42-9c3fd9a1112e"
	Dec 04 23:35:17 functional-217112 kubelet[5314]: E1204 23:35:17.711359    5314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733355317711194419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:35:17 functional-217112 kubelet[5314]: E1204 23:35:17.711406    5314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733355317711194419,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:35:18 functional-217112 kubelet[5314]: E1204 23:35:18.513398    5314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="eb24b85b-9829-45bb-9fea-250b07d13e4c"
	Dec 04 23:35:27 functional-217112 kubelet[5314]: E1204 23:35:27.513129    5314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="6301f750-dc04-406b-ab42-9c3fd9a1112e"
	Dec 04 23:35:27 functional-217112 kubelet[5314]: E1204 23:35:27.712970    5314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733355327712788349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:35:27 functional-217112 kubelet[5314]: E1204 23:35:27.713014    5314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733355327712788349,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:35:32 functional-217112 kubelet[5314]: E1204 23:35:32.512850    5314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="eb24b85b-9829-45bb-9fea-250b07d13e4c"
	Dec 04 23:35:37 functional-217112 kubelet[5314]: E1204 23:35:37.714537    5314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733355337714305781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:35:37 functional-217112 kubelet[5314]: E1204 23:35:37.714590    5314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733355337714305781,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:35:39 functional-217112 kubelet[5314]: E1204 23:35:39.513670    5314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="6301f750-dc04-406b-ab42-9c3fd9a1112e"
	Dec 04 23:35:40 functional-217112 kubelet[5314]: E1204 23:35:40.333536    5314 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Dec 04 23:35:40 functional-217112 kubelet[5314]: E1204 23:35:40.333602    5314 kuberuntime_image.go:55] "Failed to pull image" err="loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Dec 04 23:35:40 functional-217112 kubelet[5314]: E1204 23:35:40.333732    5314 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:mysql,Image:docker.io/mysql:5.7,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:mysql,HostPort:0,ContainerPort:3306,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:MYSQL_ROOT_PASSWORD,Value:password,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{700 -3} {<nil>} 700m DecimalSI},memory: {{734003200 0} {<nil>} 700Mi BinarySI},},Requests:ResourceList{cpu: {{600 -3} {<nil>} 600m DecimalSI},memory: {{536870912 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wgm5b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext
:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mysql-6cdb49bbb-dhqbq_default(0a47658f-c2ca-401f-aad4-4a7152c97d57): ErrImagePull: loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 04 23:35:40 functional-217112 kubelet[5314]: E1204 23:35:40.334914    5314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"loading manifest for target platform: reading manifest sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/mysql-6cdb49bbb-dhqbq" podUID="0a47658f-c2ca-401f-aad4-4a7152c97d57"
	Dec 04 23:35:47 functional-217112 kubelet[5314]: E1204 23:35:47.513546    5314 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nginx\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx:alpine\\\"\"" pod="default/nginx-svc" podUID="eb24b85b-9829-45bb-9fea-250b07d13e4c"
	Dec 04 23:35:47 functional-217112 kubelet[5314]: E1204 23:35:47.715880    5314 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733355347715727379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 04 23:35:47 functional-217112 kubelet[5314]: E1204 23:35:47.715917    5314 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733355347715727379,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:236043,},InodesUsed:&UInt64Value{Value:122,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> kubernetes-dashboard [6779d83a3960a00bdfcf7aa67dad0bd2dee7f96ea22d36b994855daccffce816] <==
	2024/12/04 23:25:39 Starting overwatch
	2024/12/04 23:25:39 Using namespace: kubernetes-dashboard
	2024/12/04 23:25:39 Using in-cluster config to connect to apiserver
	2024/12/04 23:25:39 Using secret token for csrf signing
	2024/12/04 23:25:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/12/04 23:25:40 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/12/04 23:25:40 Successful initial request to the apiserver, version: v1.31.2
	2024/12/04 23:25:40 Generating JWE encryption key
	2024/12/04 23:25:40 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/12/04 23:25:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/12/04 23:25:40 Initializing JWE encryption key from synchronized object
	2024/12/04 23:25:40 Creating in-cluster Sidecar client
	2024/12/04 23:25:40 Serving insecurely on HTTP port: 9090
	2024/12/04 23:25:40 Successful request to sidecar
	
	
	==> storage-provisioner [3d828f0588fd4a6f3bd830e4a85bded49f537f7fd3e1ffc7b9ab7cc470b7430a] <==
	I1204 23:24:36.480898       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1204 23:24:36.487887       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1204 23:24:36.487937       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [bf56b586908d2bb939a3ba1dae595b2ff931be4fe3fca39b13aeeb41795c47b0] <==
	I1204 23:25:01.927608       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1204 23:25:01.996481       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1204 23:25:01.996545       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1204 23:25:19.395326       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1204 23:25:19.395410       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"de9bf37a-110c-46e5-add4-6bb479e10e0d", APIVersion:"v1", ResourceVersion:"648", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-217112_9a054cd3-5a0a-4ac0-abd4-9d1c31388fa3 became leader
	I1204 23:25:19.395540       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-217112_9a054cd3-5a0a-4ac0-abd4-9d1c31388fa3!
	I1204 23:25:19.496436       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-217112_9a054cd3-5a0a-4ac0-abd4-9d1c31388fa3!
	I1204 23:25:33.897467       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1204 23:25:33.897651       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"4ac7decc-af81-42c7-902e-b44c5395dedb", APIVersion:"v1", ResourceVersion:"788", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1204 23:25:33.897565       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    bf625bb0-8aca-4fe0-aeaf-9131a0cd7e96 388 0 2024-12-04 23:23:53 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-12-04 23:23:53 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-4ac7decc-af81-42c7-902e-b44c5395dedb &PersistentVolumeClaim{ObjectMeta:{myclaim  default  4ac7decc-af81-42c7-902e-b44c5395dedb 788 0 2024-12-04 23:25:33 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["Rea
dWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-12-04 23:25:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-12-04 23:25:33 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:
ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1204 23:25:33.898048       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-4ac7decc-af81-42c7-902e-b44c5395dedb" provisioned
	I1204 23:25:33.898080       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1204 23:25:33.898087       1 volume_store.go:212] Trying to save persistentvolume "pvc-4ac7decc-af81-42c7-902e-b44c5395dedb"
	I1204 23:25:33.908002       1 volume_store.go:219] persistentvolume "pvc-4ac7decc-af81-42c7-902e-b44c5395dedb" saved
	I1204 23:25:33.908111       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"4ac7decc-af81-42c7-902e-b44c5395dedb", APIVersion:"v1", ResourceVersion:"788", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-4ac7decc-af81-42c7-902e-b44c5395dedb
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-217112 -n functional-217112
helpers_test.go:261: (dbg) Run:  kubectl --context functional-217112 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount mysql-6cdb49bbb-dhqbq nginx-svc sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-217112 describe pod busybox-mount mysql-6cdb49bbb-dhqbq nginx-svc sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-217112 describe pod busybox-mount mysql-6cdb49bbb-dhqbq nginx-svc sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-217112/192.168.49.2
	Start Time:       Wed, 04 Dec 2024 23:25:31 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://107f9286647084f7b2532f41552a5b90e99ac711309956b364d67e653f0f351c
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Wed, 04 Dec 2024 23:25:33 +0000
	      Finished:     Wed, 04 Dec 2024 23:25:33 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b96b9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-b96b9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-217112
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.087s (1.087s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             mysql-6cdb49bbb-dhqbq
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-217112/192.168.49.2
	Start Time:       Wed, 04 Dec 2024 23:25:47 +0000
	Labels:           app=mysql
	                  pod-template-hash=6cdb49bbb
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.11
	IPs:
	  IP:           10.244.0.11
	Controlled By:  ReplicaSet/mysql-6cdb49bbb
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wgm5b (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wgm5b:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-6cdb49bbb-dhqbq to functional-217112
	  Normal   Pulling    3m53s (x4 over 10m)   kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m41s (x4 over 8m8s)  kubelet            Failed to pull image "docker.io/mysql:5.7": reading manifest 5.7 in docker.io/library/mysql: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m41s (x4 over 8m8s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    2m15s (x7 over 8m7s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m15s (x7 over 8m7s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             nginx-svc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-217112/192.168.49.2
	Start Time:       Wed, 04 Dec 2024 23:25:40 +0000
	Labels:           run=nginx-svc
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6k5c4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-6k5c4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/nginx-svc to functional-217112
	  Normal   Pulling    4m24s (x4 over 10m)    kubelet            Pulling image "docker.io/nginx:alpine"
	  Warning  Failed     3m12s (x4 over 8m39s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m12s (x4 over 8m39s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m46s (x7 over 8m38s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3s (x16 over 8m38s)    kubelet            Back-off pulling image "docker.io/nginx:alpine"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-217112/192.168.49.2
	Start Time:       Wed, 04 Dec 2024 23:25:34 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hblcf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-hblcf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/sp-pod to functional-217112
	  Warning  Failed     9m9s                   kubelet            Failed to pull image "docker.io/nginx": initializing source docker://nginx:latest: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   Pulling    5m3s (x4 over 10m)     kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m42s (x4 over 9m9s)   kubelet            Error: ErrImagePull
	  Warning  Failed     3m42s (x3 over 7m31s)  kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     3m1s (x7 over 9m9s)    kubelet            Error: ImagePullBackOff
	  Normal   BackOff    11s (x17 over 9m9s)    kubelet            Back-off pulling image "docker.io/nginx"
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/MySQL FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/MySQL (602.88s)
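Note: every non-running pod in the post-mortem above (busybox-mount aside, which completed) is blocked on the same root cause: anonymous pulls from docker.io hitting the "toomanyrequests" rate limit. One possible mitigation, sketched here under the assumption that registry credentials are available (the secret name "regcred" and the <user>/<token> values are placeholders, not part of this run), is to attach an imagePullSecret to the default service account so pod pulls are authenticated:

    kubectl --context functional-217112 create secret docker-registry regcred \
      --docker-server=https://index.docker.io/v1/ \
      --docker-username=<user> --docker-password=<token>
    kubectl --context functional-217112 patch serviceaccount default \
      -p '{"imagePullSecrets": [{"name": "regcred"}]}'

Pods created after the patch would pull docker.io images under the authenticated, higher rate limit.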
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.65s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-217112 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [eb24b85b-9829-45bb-9fea-250b07d13e4c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:329: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: WARNING: pod list for "default" "run=nginx-svc" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_tunnel_test.go:216: ***** TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: pod "run=nginx-svc" failed to start within 4m0s: context deadline exceeded ****
functional_test_tunnel_test.go:216: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-217112 -n functional-217112
functional_test_tunnel_test.go:216: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: showing logs for failed pods as of 2024-12-04 23:29:40.937612007 +0000 UTC m=+1140.803597107
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-217112 describe po nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) kubectl --context functional-217112 describe po nginx-svc -n default:
Name:             nginx-svc
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-217112/192.168.49.2
Start Time:       Wed, 04 Dec 2024 23:25:40 +0000
Labels:           run=nginx-svc
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:  10.244.0.9
Containers:
nginx:
Container ID:   
Image:          docker.io/nginx:alpine
Image ID:       
Port:           80/TCP
Host Port:      0/TCP
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6k5c4 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-6k5c4:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                  From               Message
----     ------     ----                 ----               -------
Normal   Scheduled  4m                   default-scheduler  Successfully assigned default/nginx-svc to functional-217112
Warning  Failed     51s (x2 over 2m30s)  kubelet            Failed to pull image "docker.io/nginx:alpine": reading manifest alpine in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
Warning  Failed     51s (x2 over 2m30s)  kubelet            Error: ErrImagePull
Normal   BackOff    37s (x2 over 2m29s)  kubelet            Back-off pulling image "docker.io/nginx:alpine"
Warning  Failed     37s (x2 over 2m29s)  kubelet            Error: ImagePullBackOff
Normal   Pulling    25s (x3 over 4m1s)   kubelet            Pulling image "docker.io/nginx:alpine"
functional_test_tunnel_test.go:216: (dbg) Run:  kubectl --context functional-217112 logs nginx-svc -n default
functional_test_tunnel_test.go:216: (dbg) Non-zero exit: kubectl --context functional-217112 logs nginx-svc -n default: exit status 1 (63.439576ms)
** stderr ** 
	Error from server (BadRequest): container "nginx" in pod "nginx-svc" is waiting to start: trying and failing to pull image
** /stderr **
functional_test_tunnel_test.go:216: kubectl --context functional-217112 logs nginx-svc -n default: exit status 1
functional_test_tunnel_test.go:217: wait: run=nginx-svc within 4m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (240.65s)
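Note: this is the same rate-limit cascade seen above; the nginx-svc pod could not pull docker.io/nginx:alpine within the 4m0s wait. A quick cluster-side confirmation (a diagnostic sketch, not something the test runs) is to list failed-pull events in the namespace:

    kubectl --context functional-217112 get events -n default \
      --field-selector reason=Failed --sort-by=.lastTimestamp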
TestFunctional/parallel/ImageCommands/Setup (0.43s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Non-zero exit: docker pull kicbase/echo-server:1.0: exit status 1 (427.323391ms)
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
** /stderr **
functional_test.go:344: failed to setup test (pull image): exit status 1
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/Setup (0.43s)
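Note: here the rate limit hits the host's Docker daemon rather than the cluster, so kicbase/echo-server:1.0 never reaches the local image cache and the dependent ImageCommands subtests below fail in turn. A hedged sketch of pre-authenticating the host daemon before the pull (credentials are placeholders):

    echo "<token>" | docker login --username <user> --password-stdin
    docker pull kicbase/echo-server:1.0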
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.5s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 image load --daemon kicbase/echo-server:functional-217112 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-217112" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.50s)
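Note: "image load --daemon" copies kicbase/echo-server:functional-217112 out of the host's Docker daemon, but that tag was never created because the Setup pull above failed. Reconstructed from the test invocations, and assuming the initial pull succeeds, the intended sequence is:

    docker pull kicbase/echo-server:1.0
    docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-217112
    out/minikube-linux-amd64 -p functional-217112 image load --daemon kicbase/echo-server:functional-217112
    out/minikube-linux-amd64 -p functional-217112 image ls    # the tag should now appear

The same reasoning applies to ImageReloadDaemon and ImageTagAndLoadDaemon below.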
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.51s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 image load --daemon kicbase/echo-server:functional-217112 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 image ls
functional_test.go:446: expected "kicbase/echo-server:functional-217112" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.51s)
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.42s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:235: (dbg) Non-zero exit: docker pull kicbase/echo-server:latest: exit status 1 (419.7287ms)
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
** /stderr **
functional_test.go:237: failed to setup test (pull image): exit status 1
** stderr ** 
	Error response from daemon: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.42s)
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 image save kicbase/echo-server:functional-217112 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:386: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)
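Note: "image save" produced no tar file, consistent with the tag never having been loaded into the cluster by the failed subtests above. A minimal verification sketch (the /tmp path is illustrative, not the test's path):

    out/minikube-linux-amd64 -p functional-217112 image ls | grep echo-server
    out/minikube-linux-amd64 -p functional-217112 image save kicbase/echo-server:functional-217112 /tmp/echo-server-save.tar
    ls -l /tmp/echo-server-save.tar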
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:411: loading image into minikube from file: <nil>
** stderr ** 
	I1204 23:25:45.711146  432825 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:25:45.711289  432825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:25:45.711300  432825 out.go:358] Setting ErrFile to fd 2...
	I1204 23:25:45.711305  432825 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:25:45.711474  432825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-381016/.minikube/bin
	I1204 23:25:45.712098  432825 config.go:182] Loaded profile config "functional-217112": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:25:45.712214  432825 config.go:182] Loaded profile config "functional-217112": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:25:45.712564  432825 cli_runner.go:164] Run: docker container inspect functional-217112 --format={{.State.Status}}
	I1204 23:25:45.729680  432825 ssh_runner.go:195] Run: systemctl --version
	I1204 23:25:45.729752  432825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-217112
	I1204 23:25:45.747866  432825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/functional-217112/id_rsa Username:docker}
	I1204 23:25:45.839357  432825 cache_images.go:289] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1204 23:25:45.839425  432825 cache_images.go:253] Failed to load cached images for "functional-217112": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1204 23:25:45.839447  432825 cache_images.go:265] failed pushing to: functional-217112
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.18s)
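Note: the "no such file or directory" in the stderr above follows directly from ImageSaveToFile never writing echo-server-save.tar, so the load step had nothing to read. A defensive variant (a sketch, not the test's code) would guard on the artifact first:

    TAR=/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
    test -f "$TAR" && out/minikube-linux-amd64 -p functional-217112 image load "$TAR"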
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.02s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-217112
functional_test.go:419: (dbg) Non-zero exit: docker rmi kicbase/echo-server:functional-217112: exit status 1 (17.532156ms)
** stderr ** 
	Error response from daemon: No such image: kicbase/echo-server:functional-217112
** /stderr **
functional_test.go:421: failed to remove image from docker: exit status 1
** stderr ** 
	Error response from daemon: No such image: kicbase/echo-server:functional-217112
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.02s)
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (75.92s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
I1204 23:29:41.066422  387894 retry.go:31] will retry after 1.504116409s: Temporary Error: Get "http:": http: no Host in request URL
I1204 23:29:42.570763  387894 retry.go:31] will retry after 6.087409596s: Temporary Error: Get "http:": http: no Host in request URL
I1204 23:29:48.658653  387894 retry.go:31] will retry after 4.947811187s: Temporary Error: Get "http:": http: no Host in request URL
I1204 23:29:53.606894  387894 retry.go:31] will retry after 12.041293212s: Temporary Error: Get "http:": http: no Host in request URL
I1204 23:30:05.649126  387894 retry.go:31] will retry after 8.669925931s: Temporary Error: Get "http:": http: no Host in request URL
I1204 23:30:14.319250  387894 retry.go:31] will retry after 14.395470886s: Temporary Error: Get "http:": http: no Host in request URL
I1204 23:30:28.715433  387894 retry.go:31] will retry after 28.211119278s: Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:288: failed to hit nginx at "http://": Temporary Error: Get "http:": http: no Host in request URL
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-217112 get svc nginx-svc
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
NAME        TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx-svc   LoadBalancer   10.111.245.58   10.111.245.58   80:31417/TCP   5m16s
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (75.92s)
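Note: the repeated Get "http:" errors show the test never derived a usable URL, and although the tunnel did assign nginx-svc an external IP (10.111.245.58), the backing pod was still in ImagePullBackOff, so no response body could contain "Welcome to nginx!". A manual probe of the tunneled service (diagnostic sketch) would be:

    IP=$(kubectl --context functional-217112 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    curl -s "http://$IP/" | grep -i "welcome to nginx"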
Test pass (288/329)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 5.7
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.22
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.2/json-events 5.55
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.07
18 TestDownloadOnly/v1.31.2/DeleteAll 0.22
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 1.1
21 TestBinaryMirror 0.78
22 TestOffline 55.82
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 176.56
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 8.47
35 TestAddons/parallel/Registry 14.52
37 TestAddons/parallel/InspektorGadget 10.73
41 TestAddons/parallel/Headlamp 17.45
42 TestAddons/parallel/CloudSpanner 6.53
44 TestAddons/parallel/NvidiaDevicePlugin 5.5
45 TestAddons/parallel/Yakd 11.66
46 TestAddons/parallel/AmdGpuDevicePlugin 5.5
47 TestAddons/StoppedEnableDisable 12.11
48 TestCertOptions 32.41
49 TestCertExpiration 230.48
51 TestForceSystemdFlag 30.47
52 TestForceSystemdEnv 36.78
54 TestKVMDriverInstallOrUpdate 3.48
58 TestErrorSpam/setup 23.44
59 TestErrorSpam/start 0.58
60 TestErrorSpam/status 0.88
61 TestErrorSpam/pause 1.55
62 TestErrorSpam/unpause 1.72
63 TestErrorSpam/stop 1.37
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 40.27
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 29.85
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.06
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.32
75 TestFunctional/serial/CacheCmd/cache/add_local 1.36
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.7
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 33.77
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.43
86 TestFunctional/serial/LogsFileCmd 1.44
87 TestFunctional/serial/InvalidService 3.87
89 TestFunctional/parallel/ConfigCmd 0.4
90 TestFunctional/parallel/DashboardCmd 10.98
91 TestFunctional/parallel/DryRun 0.41
92 TestFunctional/parallel/InternationalLanguage 0.19
93 TestFunctional/parallel/StatusCmd 1.11
97 TestFunctional/parallel/ServiceCmdConnect 7.68
98 TestFunctional/parallel/AddonsCmd 0.15
101 TestFunctional/parallel/SSHCmd 0.55
102 TestFunctional/parallel/CpCmd 1.91
104 TestFunctional/parallel/FileSync 0.25
105 TestFunctional/parallel/CertSync 1.53
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.5
113 TestFunctional/parallel/License 0.18
114 TestFunctional/parallel/ServiceCmd/DeployApp 9.19
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.42
116 TestFunctional/parallel/ProfileCmd/profile_list 0.48
117 TestFunctional/parallel/ProfileCmd/profile_json_output 0.59
118 TestFunctional/parallel/MountCmd/any-port 7.02
119 TestFunctional/parallel/MountCmd/specific-port 1.9
120 TestFunctional/parallel/ServiceCmd/List 0.5
121 TestFunctional/parallel/ServiceCmd/JSONOutput 0.54
122 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
123 TestFunctional/parallel/ServiceCmd/Format 0.51
124 TestFunctional/parallel/MountCmd/VerifyCleanup 2.02
125 TestFunctional/parallel/ServiceCmd/URL 0.62
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.42
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
131 TestFunctional/parallel/Version/short 0.05
132 TestFunctional/parallel/Version/components 0.47
133 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
134 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
135 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
136 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
137 TestFunctional/parallel/ImageCommands/ImageBuild 2.07
143 TestFunctional/parallel/ImageCommands/ImageRemove 0.5
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 100.72
161 TestMultiControlPlane/serial/DeployApp 4.75
162 TestMultiControlPlane/serial/PingHostFromPods 1.09
163 TestMultiControlPlane/serial/AddWorkerNode 35.89
164 TestMultiControlPlane/serial/NodeLabels 0.07
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
166 TestMultiControlPlane/serial/CopyFile 16.03
167 TestMultiControlPlane/serial/StopSecondaryNode 12.52
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.69
169 TestMultiControlPlane/serial/RestartSecondaryNode 33.87
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.12
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 134.11
172 TestMultiControlPlane/serial/DeleteSecondaryNode 11.39
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
174 TestMultiControlPlane/serial/StopCluster 35.59
175 TestMultiControlPlane/serial/RestartCluster 112.67
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.66
177 TestMultiControlPlane/serial/AddSecondaryNode 41.52
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.87
182 TestJSONOutput/start/Command 40.89
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.68
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.61
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 5.77
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.22
207 TestKicCustomNetwork/create_custom_network 27.84
208 TestKicCustomNetwork/use_default_bridge_network 25.56
209 TestKicExistingNetwork 22.94
210 TestKicCustomSubnet 25.82
211 TestKicStaticIP 24.06
212 TestMainNoArgs 0.05
213 TestMinikubeProfile 47.44
216 TestMountStart/serial/StartWithMountFirst 8.21
217 TestMountStart/serial/VerifyMountFirst 0.25
218 TestMountStart/serial/StartWithMountSecond 8.23
219 TestMountStart/serial/VerifyMountSecond 0.25
220 TestMountStart/serial/DeleteFirst 1.61
221 TestMountStart/serial/VerifyMountPostDelete 0.25
222 TestMountStart/serial/Stop 1.18
223 TestMountStart/serial/RestartStopped 7.32
224 TestMountStart/serial/VerifyMountPostStop 0.25
227 TestMultiNode/serial/FreshStart2Nodes 73.67
228 TestMultiNode/serial/DeployApp2Nodes 3.73
229 TestMultiNode/serial/PingHostFrom2Pods 0.76
230 TestMultiNode/serial/AddNode 30.69
231 TestMultiNode/serial/MultiNodeLabels 0.06
232 TestMultiNode/serial/ProfileList 0.64
233 TestMultiNode/serial/CopyFile 9.14
234 TestMultiNode/serial/StopNode 2.12
235 TestMultiNode/serial/StartAfterStop 9.08
236 TestMultiNode/serial/RestartKeepsNodes 79.92
237 TestMultiNode/serial/DeleteNode 5.05
238 TestMultiNode/serial/StopMultiNode 23.78
239 TestMultiNode/serial/RestartMultiNode 50.11
240 TestMultiNode/serial/ValidateNameConflict 25.24
245 TestPreload 103.37
247 TestScheduledStopUnix 99.73
250 TestInsufficientStorage 9.76
251 TestRunningBinaryUpgrade 59.97
253 TestKubernetesUpgrade 351.97
254 TestMissingContainerUpgrade 143.65
255 TestStoppedBinaryUpgrade/Setup 0.49
257 TestPause/serial/Start 52.52
258 TestStoppedBinaryUpgrade/Upgrade 95.8
259 TestPause/serial/SecondStartNoReconfiguration 38.17
260 TestPause/serial/Pause 0.75
261 TestPause/serial/VerifyStatus 0.36
262 TestPause/serial/Unpause 0.65
263 TestPause/serial/PauseAgain 0.73
264 TestPause/serial/DeletePaused 2.76
265 TestPause/serial/VerifyDeletedResources 0.61
266 TestStoppedBinaryUpgrade/MinikubeLogs 0.89
275 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
276 TestNoKubernetes/serial/StartWithK8s 29.34
284 TestNetworkPlugins/group/false 5.01
288 TestNoKubernetes/serial/StartWithStopK8s 11.73
289 TestNoKubernetes/serial/Start 11.7
290 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
291 TestNoKubernetes/serial/ProfileList 1.93
292 TestNoKubernetes/serial/Stop 1.22
293 TestNoKubernetes/serial/StartNoArgs 6.75
294 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.4
296 TestStartStop/group/old-k8s-version/serial/FirstStart 129.5
298 TestStartStop/group/no-preload/serial/FirstStart 51.85
299 TestStartStop/group/no-preload/serial/DeployApp 9.24
300 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.86
301 TestStartStop/group/no-preload/serial/Stop 11.87
302 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
303 TestStartStop/group/no-preload/serial/SecondStart 262.94
304 TestStartStop/group/old-k8s-version/serial/DeployApp 9.39
305 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.8
306 TestStartStop/group/old-k8s-version/serial/Stop 11.92
307 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
308 TestStartStop/group/old-k8s-version/serial/SecondStart 121.52
310 TestStartStop/group/embed-certs/serial/FirstStart 45.7
311 TestStartStop/group/embed-certs/serial/DeployApp 9.23
312 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.97
313 TestStartStop/group/embed-certs/serial/Stop 11.86
314 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
315 TestStartStop/group/embed-certs/serial/SecondStart 285.03
316 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
317 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.08
318 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
319 TestStartStop/group/old-k8s-version/serial/Pause 2.61
321 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 41.47
323 TestStartStop/group/newest-cni/serial/FirstStart 29.03
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.26
325 TestStartStop/group/newest-cni/serial/DeployApp 0
326 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.88
327 TestStartStop/group/newest-cni/serial/Stop 1.21
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.04
329 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
330 TestStartStop/group/newest-cni/serial/SecondStart 13.27
331 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.94
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.24
333 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 263.29
334 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
337 TestStartStop/group/newest-cni/serial/Pause 2.91
338 TestNetworkPlugins/group/auto/Start 45.33
339 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
340 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.09
341 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
342 TestStartStop/group/no-preload/serial/Pause 3.13
343 TestNetworkPlugins/group/kindnet/Start 45.71
344 TestNetworkPlugins/group/auto/KubeletFlags 0.28
345 TestNetworkPlugins/group/auto/NetCatPod 10.19
346 TestNetworkPlugins/group/auto/DNS 0.13
347 TestNetworkPlugins/group/auto/Localhost 0.11
348 TestNetworkPlugins/group/auto/HairPin 0.11
349 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
350 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
351 TestNetworkPlugins/group/kindnet/NetCatPod 10.23
352 TestNetworkPlugins/group/calico/Start 55.14
353 TestNetworkPlugins/group/kindnet/DNS 0.15
354 TestNetworkPlugins/group/kindnet/Localhost 0.12
355 TestNetworkPlugins/group/kindnet/HairPin 0.11
356 TestNetworkPlugins/group/custom-flannel/Start 49.25
357 TestNetworkPlugins/group/calico/ControllerPod 6.01
358 TestNetworkPlugins/group/calico/KubeletFlags 0.26
359 TestNetworkPlugins/group/calico/NetCatPod 9.19
360 TestNetworkPlugins/group/calico/DNS 0.14
361 TestNetworkPlugins/group/calico/Localhost 0.11
362 TestNetworkPlugins/group/calico/HairPin 0.11
363 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.26
364 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.19
365 TestNetworkPlugins/group/custom-flannel/DNS 0.15
366 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
367 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
368 TestNetworkPlugins/group/enable-default-cni/Start 72.52
369 TestNetworkPlugins/group/flannel/Start 48.37
370 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
371 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
372 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
373 TestStartStop/group/embed-certs/serial/Pause 3.06
374 TestNetworkPlugins/group/bridge/Start 37.45
375 TestNetworkPlugins/group/flannel/ControllerPod 6.01
376 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
377 TestNetworkPlugins/group/flannel/NetCatPod 11.18
378 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
379 TestNetworkPlugins/group/bridge/NetCatPod 10.22
380 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.32
381 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.22
382 TestNetworkPlugins/group/flannel/DNS 0.13
383 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
384 TestNetworkPlugins/group/flannel/Localhost 0.13
385 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
386 TestNetworkPlugins/group/flannel/HairPin 0.12
387 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
388 TestNetworkPlugins/group/bridge/DNS 21.24
389 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
390 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
391 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
392 TestNetworkPlugins/group/bridge/Localhost 0.13
393 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.67
394 TestNetworkPlugins/group/bridge/HairPin 0.11
TestDownloadOnly/v1.20.0/json-events (5.7s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-287298 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-287298 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.702119236s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (5.70s)
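
Note: with -o=json each progress step is emitted as a JSON event on stdout, which is what the json-events assertions consume. A minimal hand reproduction of the command this test drives (profile name is illustrative; out/minikube-linux-amd64 is assumed to be the freshly built binary):

	# download images and binaries for a given Kubernetes version without creating a node;
	# each line of stdout is a JSON progress event
	out/minikube-linux-amd64 start -o=json --download-only -p download-demo \
	  --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker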

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1204 23:10:45.880288  387894 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1204 23:10:45.880424  387894 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20045-381016/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
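
Note: per the log above, this assertion boils down to the cached preload tarball being present on disk. A hedged way to make the same check by hand (path taken from the log; it depends on MINIKUBE_HOME):

	# the preload tarball is cached per Kubernetes version and container runtime
	ls -lh /home/jenkins/minikube-integration/20045-381016/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4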

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-287298
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-287298: exit status 85 (67.89006ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-287298 | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC |          |
	|         | -p download-only-287298        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 23:10:40
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 23:10:40.223454  387906 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:10:40.223568  387906 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:10:40.223573  387906 out.go:358] Setting ErrFile to fd 2...
	I1204 23:10:40.223577  387906 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:10:40.223786  387906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-381016/.minikube/bin
	W1204 23:10:40.223930  387906 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20045-381016/.minikube/config/config.json: open /home/jenkins/minikube-integration/20045-381016/.minikube/config/config.json: no such file or directory
	I1204 23:10:40.224525  387906 out.go:352] Setting JSON to true
	I1204 23:10:40.225535  387906 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6789,"bootTime":1733347051,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 23:10:40.225667  387906 start.go:139] virtualization: kvm guest
	I1204 23:10:40.228283  387906 out.go:97] [download-only-287298] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1204 23:10:40.228441  387906 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20045-381016/.minikube/cache/preloaded-tarball: no such file or directory
	I1204 23:10:40.228489  387906 notify.go:220] Checking for updates...
	I1204 23:10:40.230075  387906 out.go:169] MINIKUBE_LOCATION=20045
	I1204 23:10:40.231809  387906 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 23:10:40.233423  387906 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20045-381016/kubeconfig
	I1204 23:10:40.235057  387906 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-381016/.minikube
	I1204 23:10:40.236502  387906 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1204 23:10:40.239399  387906 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1204 23:10:40.239647  387906 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 23:10:40.262770  387906 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1204 23:10:40.262863  387906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:10:40.315710  387906 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-12-04 23:10:40.306563823 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1204 23:10:40.315812  387906 docker.go:318] overlay module found
	I1204 23:10:40.317552  387906 out.go:97] Using the docker driver based on user configuration
	I1204 23:10:40.317579  387906 start.go:297] selected driver: docker
	I1204 23:10:40.317586  387906 start.go:901] validating driver "docker" against <nil>
	I1204 23:10:40.317676  387906 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:10:40.362913  387906 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-12-04 23:10:40.353398132 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1204 23:10:40.363111  387906 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 23:10:40.363651  387906 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1204 23:10:40.363815  387906 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1204 23:10:40.366002  387906 out.go:169] Using Docker driver with root privileges
	I1204 23:10:40.368108  387906 cni.go:84] Creating CNI manager for ""
	I1204 23:10:40.368207  387906 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1204 23:10:40.368221  387906 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1204 23:10:40.368315  387906 start.go:340] cluster config:
	{Name:download-only-287298 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-287298 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:10:40.370035  387906 out.go:97] Starting "download-only-287298" primary control-plane node in "download-only-287298" cluster
	I1204 23:10:40.370058  387906 cache.go:121] Beginning downloading kic base image for docker with crio
	I1204 23:10:40.371502  387906 out.go:97] Pulling base image v0.0.45-1730888964-19917 ...
	I1204 23:10:40.371531  387906 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1204 23:10:40.371676  387906 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1204 23:10:40.388823  387906 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1204 23:10:40.389035  387906 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1204 23:10:40.389146  387906 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1204 23:10:40.422858  387906 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1204 23:10:40.422916  387906 cache.go:56] Caching tarball of preloaded images
	I1204 23:10:40.423091  387906 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1204 23:10:40.425142  387906 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1204 23:10:40.425182  387906 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1204 23:10:40.462071  387906 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20045-381016/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1204 23:10:43.679261  387906 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	I1204 23:10:44.382877  387906 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1204 23:10:44.382982  387906 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20045-381016/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-287298 host does not exist
	  To start a cluster, run: "minikube start -p download-only-287298"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
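
Note: the download URL in the stdout above carries the expected digest as a query parameter (checksum=md5:f93b07cde9c3289306cbaeb7a1803c19), and the log shows minikube saving and then verifying that checksum. The same verification can be repeated by hand (path and digest taken from the log above):

	md5sum /home/jenkins/minikube-integration/20045-381016/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	# expected digest: f93b07cde9c3289306cbaeb7a1803c19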

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-287298
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)
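
Note: the two delete flavors exercised here differ in scope, and, as the test name suggests, the per-profile delete is expected to succeed even when a prior "delete --all" already removed everything. A sketch:

	out/minikube-linux-amd64 delete --all                      # removes every profile
	out/minikube-linux-amd64 delete -p download-only-287298   # per-profile delete; still exits 0 if already gone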

                                                
                                    
TestDownloadOnly/v1.31.2/json-events (5.55s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-701357 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-701357 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.552356481s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (5.55s)

                                                
                                    
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1204 23:10:51.861451  387894 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1204 23:10:51.861499  387894 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20045-381016/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-701357
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-701357: exit status 85 (69.250382ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-287298 | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC |                     |
	|         | -p download-only-287298        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| delete  | -p download-only-287298        | download-only-287298 | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC | 04 Dec 24 23:10 UTC |
	| start   | -o=json --download-only        | download-only-701357 | jenkins | v1.34.0 | 04 Dec 24 23:10 UTC |                     |
	|         | -p download-only-701357        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/04 23:10:46
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1204 23:10:46.353719  388248 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:10:46.354018  388248 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:10:46.354031  388248 out.go:358] Setting ErrFile to fd 2...
	I1204 23:10:46.354036  388248 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:10:46.354211  388248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-381016/.minikube/bin
	I1204 23:10:46.354853  388248 out.go:352] Setting JSON to true
	I1204 23:10:46.355895  388248 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6795,"bootTime":1733347051,"procs":169,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 23:10:46.356013  388248 start.go:139] virtualization: kvm guest
	I1204 23:10:46.358252  388248 out.go:97] [download-only-701357] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 23:10:46.358483  388248 notify.go:220] Checking for updates...
	I1204 23:10:46.359823  388248 out.go:169] MINIKUBE_LOCATION=20045
	I1204 23:10:46.361419  388248 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 23:10:46.362871  388248 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20045-381016/kubeconfig
	I1204 23:10:46.364410  388248 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-381016/.minikube
	I1204 23:10:46.365823  388248 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1204 23:10:46.368508  388248 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1204 23:10:46.368840  388248 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 23:10:46.392250  388248 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1204 23:10:46.392361  388248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:10:46.438420  388248 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:46 SystemTime:2024-12-04 23:10:46.42942127 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1204 23:10:46.438548  388248 docker.go:318] overlay module found
	I1204 23:10:46.440521  388248 out.go:97] Using the docker driver based on user configuration
	I1204 23:10:46.440554  388248 start.go:297] selected driver: docker
	I1204 23:10:46.440563  388248 start.go:901] validating driver "docker" against <nil>
	I1204 23:10:46.440665  388248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:10:46.488625  388248 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:46 SystemTime:2024-12-04 23:10:46.479904838 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1204 23:10:46.488845  388248 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1204 23:10:46.489422  388248 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I1204 23:10:46.489588  388248 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1204 23:10:46.491663  388248 out.go:169] Using Docker driver with root privileges
	I1204 23:10:46.493151  388248 cni.go:84] Creating CNI manager for ""
	I1204 23:10:46.493232  388248 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1204 23:10:46.493249  388248 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1204 23:10:46.493328  388248 start.go:340] cluster config:
	{Name:download-only-701357 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-701357 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:10:46.495065  388248 out.go:97] Starting "download-only-701357" primary control-plane node in "download-only-701357" cluster
	I1204 23:10:46.495100  388248 cache.go:121] Beginning downloading kic base image for docker with crio
	I1204 23:10:46.496658  388248 out.go:97] Pulling base image v0.0.45-1730888964-19917 ...
	I1204 23:10:46.496688  388248 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:10:46.496819  388248 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local docker daemon
	I1204 23:10:46.513371  388248 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 to local cache
	I1204 23:10:46.513525  388248 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory
	I1204 23:10:46.513546  388248 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 in local cache directory, skipping pull
	I1204 23:10:46.513554  388248 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 exists in cache, skipping pull
	I1204 23:10:46.513565  388248 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 as a tarball
	I1204 23:10:46.554459  388248 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 23:10:46.554494  388248 cache.go:56] Caching tarball of preloaded images
	I1204 23:10:46.554664  388248 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:10:46.556561  388248 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1204 23:10:46.556574  388248 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1204 23:10:46.594387  388248 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:fc069bc1785feafa8477333f3a79092d -> /home/jenkins/minikube-integration/20045-381016/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1204 23:10:50.409262  388248 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1204 23:10:50.409369  388248 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20045-381016/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1204 23:10:51.156333  388248 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1204 23:10:51.156767  388248 profile.go:143] Saving config to /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/download-only-701357/config.json ...
	I1204 23:10:51.156802  388248 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/download-only-701357/config.json: {Name:mkbd5bb500e71bdd3a26601001253b1119d83b0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1204 23:10:51.156971  388248 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1204 23:10:51.157123  388248 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20045-381016/.minikube/cache/linux/amd64/v1.31.2/kubectl
	
	
	* The control-plane node download-only-701357 host does not exist
	  To start a cluster, run: "minikube start -p download-only-701357"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.07s)
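
Note: the non-zero exit is the expected outcome here: the profile was created with --download-only, so no control-plane host exists (see the "host does not exist" hint in the stdout above) and "minikube logs" exits with status 85, which is exactly what the test asserts. Reproducing by hand (profile name is illustrative):

	out/minikube-linux-amd64 logs -p download-only-701357
	echo $?   # expected: 85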

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-701357
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnlyKic (1.1s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-758817 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-758817" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-758817
--- PASS: TestDownloadOnlyKic (1.10s)

                                                
                                    
TestBinaryMirror (0.78s)

                                                
                                                
=== RUN   TestBinaryMirror
I1204 23:10:53.678802  387894 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-223027 --alsologtostderr --binary-mirror http://127.0.0.1:45271 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-223027" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-223027
--- PASS: TestBinaryMirror (0.78s)
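
Note: --binary-mirror redirects the kubectl/kubelet/kubeadm downloads to an alternate base URL in place of dl.k8s.io. A hedged sketch of standing up such a mirror locally (directory is illustrative; the mirror is assumed to serve the same release/... path layout as dl.k8s.io):

	python3 -m http.server 45271 --directory ./k8s-mirror &
	out/minikube-linux-amd64 start --download-only -p binary-mirror-demo \
	  --binary-mirror http://127.0.0.1:45271 --driver=docker --container-runtime=crio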

                                                
                                    
TestOffline (55.82s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-166718 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-166718 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (53.379777092s)
helpers_test.go:175: Cleaning up "offline-crio-166718" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-166718
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-166718: (2.436376136s)
--- PASS: TestOffline (55.82s)
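
Note: as the name suggests, this test exercises a start that must succeed without fresh downloads, presumably leaning on the kic base image and preload cached by the earlier download tests; --wait=true makes the start block until components report healthy. A condensed sketch of the invocation (profile name is illustrative):

	out/minikube-linux-amd64 start -p offline-demo --memory=2048 --wait=true \
	  --driver=docker --container-runtime=crio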

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-630093
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-630093: exit status 85 (55.553326ms)

                                                
                                                
-- stdout --
	* Profile "addons-630093" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-630093"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
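
Note: enabling an addon against a profile that does not exist is a soft failure: the command prints a hint and exits with status 85 rather than crashing, which is what the test asserts. By hand (profile name is illustrative):

	out/minikube-linux-amd64 addons enable dashboard -p no-such-profile
	echo $?   # expected: 85, with a "Profile ... not found" hint on stdout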

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-630093
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-630093: exit status 85 (56.644088ms)

                                                
                                                
-- stdout --
	* Profile "addons-630093" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-630093"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (176.56s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-630093 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-630093 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m56.556514065s)
--- PASS: TestAddons/Setup (176.56s)
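
Note: the whole addon suite is enabled at cluster-creation time by repeating --addons once per addon, as in the command above. A condensed sketch with a smaller, illustrative set:

	out/minikube-linux-amd64 start -p addons-demo --wait=true --memory=4000 \
	  --addons=registry --addons=metrics-server --addons=ingress \
	  --driver=docker --container-runtime=crio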

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-630093 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-630093 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (8.47s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-630093 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-630093 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3d0679a9-c218-4e74-9877-82142b389b68] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3d0679a9-c218-4e74-9877-82142b389b68] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004268488s
addons_test.go:633: (dbg) Run:  kubectl --context addons-630093 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-630093 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-630093 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
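The two printenv probes above are the substance of this test: the gcp-auth webhook is expected to inject GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT into new pods. A sketch of the same probe outside the harness (context and pod name from this run; assumes kubectl on PATH):

// Probe a pod for env vars that the gcp-auth addon webhook should inject.
package main

import (
	"fmt"
	"os/exec"
)

func podEnv(ctx, pod, name string) (string, error) {
	out, err := exec.Command("kubectl", "--context", ctx, "exec", pod, "--",
		"/bin/sh", "-c", "printenv "+name).Output()
	return string(out), err
}

func main() {
	for _, v := range []string{"GOOGLE_APPLICATION_CREDENTIALS", "GOOGLE_CLOUD_PROJECT"} {
		val, err := podEnv("addons-630093", "busybox", v)
		if err != nil {
			fmt.Printf("%s: not set (%v)\n", v, err)
			continue
		}
		fmt.Printf("%s=%s", v, val)
	}
}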
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.47s)

TestAddons/parallel/Registry (14.52s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 2.530164ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-hxfdr" [b4aeaa23-62f9-4d1d-ba93-e79530728a03] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002709152s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-s54q4" [63f58b93-3d5f-4e3c-856e-74c6e4079acd] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004203811s
addons_test.go:331: (dbg) Run:  kubectl --context addons-630093 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-630093 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-630093 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.724272767s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-630093 ip
2024/12/04 23:14:22 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-630093 addons disable registry --alsologtostderr -v=1
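The heart of the registry check is the one-shot busybox pod above probing the in-cluster service DNS name. A standalone sketch of that probe (same context, image, and service name as the log; -i replaces -it since there is no TTY here):

// Probe the in-cluster registry service from a throwaway busybox pod.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "addons-630093",
		"run", "--rm", "registry-probe", "--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox", "-i", "--",
		"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local").CombinedOutput()
	if err != nil {
		log.Fatalf("registry unreachable: %v\n%s", err, out)
	}
	fmt.Printf("registry reachable:\n%s", out)
}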
--- PASS: TestAddons/parallel/Registry (14.52s)

TestAddons/parallel/InspektorGadget (10.73s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-6fgkw" [00f5c217-241e-40cf-844e-9ea733e99f84] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00471648s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-630093 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-630093 addons disable inspektor-gadget --alsologtostderr -v=1: (5.727683582s)
--- PASS: TestAddons/parallel/InspektorGadget (10.73s)

TestAddons/parallel/Headlamp (17.45s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-630093 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-rx7wx" [e3960f18-0356-40e6-b96c-742ec9869093] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-rx7wx" [e3960f18-0356-40e6-b96c-742ec9869093] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004576693s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-630093 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-630093 addons disable headlamp --alsologtostderr -v=1: (5.659013894s)
--- PASS: TestAddons/parallel/Headlamp (17.45s)

TestAddons/parallel/CloudSpanner (6.53s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-qb868" [bd2ee58a-86d6-4981-ab81-15c06c700604] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003883462s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-630093 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.53s)

TestAddons/parallel/NvidiaDevicePlugin (5.5s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-rj8jd" [4960e5ae-fa86-4256-ac61-055f4d0adce3] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004692748s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-630093 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.50s)

TestAddons/parallel/Yakd (11.66s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-c5vc5" [c32851eb-7e81-479b-a5f3-1c4a2f5cda81] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003944717s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-630093 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-630093 addons disable yakd --alsologtostderr -v=1: (5.652163718s)
--- PASS: TestAddons/parallel/Yakd (11.66s)

TestAddons/parallel/AmdGpuDevicePlugin (5.5s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:344: "amd-gpu-device-plugin-xfdff" [b964506a-e0bb-4f8e-a33d-b1583ba8451e] Running
addons_test.go:977: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.004541101s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-630093 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.50s)

TestAddons/StoppedEnableDisable (12.11s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-630093
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-630093: (11.845006417s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-630093
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-630093
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-630093
--- PASS: TestAddons/StoppedEnableDisable (12.11s)

TestCertOptions (32.41s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-503293 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-503293 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (28.384480739s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-503293 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-503293 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-503293 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-503293" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-503293
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-503293: (3.405104311s)
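The openssl step above inspects the apiserver certificate generated with the custom --apiserver-ips/--apiserver-names/--apiserver-port flags. A sketch of the same SAN check (profile name from this run; the plain substring match is a simplification of whatever parsing the harness does):

// Verify the custom apiserver IPs/names ended up in the apiserver cert SANs.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "cert-options-503293",
		"ssh", "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "ssh/openssl failed:", err)
		os.Exit(1)
	}
	for _, want := range []string{"192.168.15.15", "www.google.com"} {
		if !strings.Contains(string(out), want) {
			fmt.Println("missing SAN entry:", want)
			os.Exit(1)
		}
	}
	fmt.Println("custom SANs present in apiserver.crt")
}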
--- PASS: TestCertOptions (32.41s)

TestCertExpiration (230.48s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-113701 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
E1205 00:00:27.714293  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/functional-217112/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-113701 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (33.426750995s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-113701 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-113701 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (14.74490309s)
helpers_test.go:175: Cleaning up "cert-expiration-113701" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-113701
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-113701: (2.308717835s)
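The two start invocations exercise certificate renewal: the first issues certificates that expire in 3m, the second restart with --cert-expiration=8760h regenerates them with a year of validity. A compressed sketch of that sequence (profile name from this run; the ~3m wait is an inference from the 230s total runtime, not quoted from the test):

// Issue short-lived cluster certs, then restart with a long expiration to renew them.
package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	if out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput(); err != nil {
		log.Fatalf("%v failed: %v\n%s", args, err, out)
	}
}

func main() {
	p := "cert-expiration-113701"
	run("start", "-p", p, "--memory=2048", "--cert-expiration=3m", "--driver=docker", "--container-runtime=crio")
	// ...the harness presumably waits about 3m here so the certs actually lapse...
	run("start", "-p", p, "--memory=2048", "--cert-expiration=8760h", "--driver=docker", "--container-runtime=crio")
	run("delete", "-p", p)
}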
--- PASS: TestCertExpiration (230.48s)

TestForceSystemdFlag (30.47s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-733713 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-733713 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (27.420720066s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-733713 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-733713" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-733713
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-733713: (2.686920012s)
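The cat of /etc/crio/crio.conf.d/02-crio.conf above is where --force-systemd gets verified; presumably the assertion is that CRI-O ended up on the systemd cgroup manager. A sketch of that check (profile name from this run; the expected substring is an assumption about the test, not quoted from it):

// Check that --force-systemd left CRI-O configured with the systemd cgroup manager.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "force-systemd-flag-733713",
		"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").Output()
	if err != nil {
		log.Fatalf("ssh failed: %v", err)
	}
	if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is using the systemd cgroup manager")
	} else {
		fmt.Println("systemd cgroup manager not found in 02-crio.conf")
	}
}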
--- PASS: TestForceSystemdFlag (30.47s)

TestForceSystemdEnv (36.78s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-159420 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-159420 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (31.039579166s)
helpers_test.go:175: Cleaning up "force-systemd-env-159420" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-159420
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-159420: (5.740080169s)
--- PASS: TestForceSystemdEnv (36.78s)

TestKVMDriverInstallOrUpdate (3.48s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
I1205 00:00:22.103175  387894 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 00:00:22.103379  387894 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1205 00:00:22.143859  387894 install.go:62] docker-machine-driver-kvm2: exit status 1
W1205 00:00:22.144319  387894 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1205 00:00:22.144389  387894 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3285035786/001/docker-machine-driver-kvm2
I1205 00:00:22.410200  387894 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3285035786/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020] Decompressors:map[bz2:0xc000517a50 gz:0xc000517a58 tar:0xc000517a00 tar.bz2:0xc000517a10 tar.gz:0xc000517a20 tar.xz:0xc000517a30 tar.zst:0xc000517a40 tbz2:0xc000517a10 tgz:0xc000517a20 txz:0xc000517a30 tzst:0xc000517a40 xz:0xc000517a60 zip:0xc000517a70 zst:0xc000517a68] Getters:map[file:0xc000972580 http:0xc0009e1950 https:0xc0009e19a0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1205 00:00:22.410273  387894 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3285035786/001/docker-machine-driver-kvm2
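The two Downloading lines show the fallback path: the arch-suffixed driver URL fails its checksum download with a 404, so the code retries the common, un-suffixed URL. A simplified sketch of that fallback (plain HTTP HEADs against the .sha256 files; the real code goes through go-getter with full checksum verification):

// Try the arch-specific driver URL first, fall back to the common one
// when its checksum file is missing.
package main

import (
	"fmt"
	"net/http"
)

func available(url string) bool {
	resp, err := http.Head(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	base := "https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2"
	for _, url := range []string{base + "-amd64", base} {
		if available(url + ".sha256") {
			fmt.Println("checksum found, would download:", url)
			return
		}
		fmt.Println("no checksum for", url, "- falling back")
	}
}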
--- PASS: TestKVMDriverInstallOrUpdate (3.48s)

TestErrorSpam/setup (23.44s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-284555 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-284555 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-284555 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-284555 --driver=docker  --container-runtime=crio: (23.442799215s)
--- PASS: TestErrorSpam/setup (23.44s)

TestErrorSpam/start (0.58s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-284555 --log_dir /tmp/nospam-284555 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-284555 --log_dir /tmp/nospam-284555 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-284555 --log_dir /tmp/nospam-284555 start --dry-run
--- PASS: TestErrorSpam/start (0.58s)

TestErrorSpam/status (0.88s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-284555 --log_dir /tmp/nospam-284555 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-284555 --log_dir /tmp/nospam-284555 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-284555 --log_dir /tmp/nospam-284555 status
--- PASS: TestErrorSpam/status (0.88s)

TestErrorSpam/pause (1.55s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-284555 --log_dir /tmp/nospam-284555 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-284555 --log_dir /tmp/nospam-284555 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-284555 --log_dir /tmp/nospam-284555 pause
--- PASS: TestErrorSpam/pause (1.55s)

TestErrorSpam/unpause (1.72s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-284555 --log_dir /tmp/nospam-284555 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-284555 --log_dir /tmp/nospam-284555 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-284555 --log_dir /tmp/nospam-284555 unpause
--- PASS: TestErrorSpam/unpause (1.72s)

TestErrorSpam/stop (1.37s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-284555 --log_dir /tmp/nospam-284555 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-284555 --log_dir /tmp/nospam-284555 stop: (1.182557019s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-284555 --log_dir /tmp/nospam-284555 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-284555 --log_dir /tmp/nospam-284555 stop
--- PASS: TestErrorSpam/stop (1.37s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20045-381016/.minikube/files/etc/test/nested/copy/387894/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (40.27s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-217112 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1204 23:23:51.636154  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:23:51.642614  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:23:51.654082  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:23:51.675515  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:23:51.717097  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:23:51.798590  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:23:51.960209  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:23:52.281971  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:23:52.923903  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:23:54.205395  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:23:56.766802  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:24:01.888923  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-217112 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (40.264868592s)
--- PASS: TestFunctional/serial/StartWithProxy (40.27s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.85s)

=== RUN   TestFunctional/serial/SoftStart
I1204 23:24:10.081168  387894 config.go:182] Loaded profile config "functional-217112": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-217112 --alsologtostderr -v=8
E1204 23:24:12.130926  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:24:32.613124  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-217112 --alsologtostderr -v=8: (29.853136955s)
functional_test.go:663: soft start took 29.853908262s for "functional-217112" cluster.
I1204 23:24:39.934939  387894 config.go:182] Loaded profile config "functional-217112": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (29.85s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-217112 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-217112 cache add registry.k8s.io/pause:3.1: (1.047939287s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-217112 cache add registry.k8s.io/pause:3.3: (1.253061033s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-217112 cache add registry.k8s.io/pause:latest: (1.02059955s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.32s)

TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-217112 /tmp/TestFunctionalserialCacheCmdcacheadd_local1188648917/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 cache add minikube-local-cache-test:functional-217112
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 cache delete minikube-local-cache-test:functional-217112
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-217112
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.7s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217112 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (271.665219ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh sudo crictl inspecti registry.k8s.io/pause:latest
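The reload sequence above is: remove the cached image from the node, confirm crictl no longer finds it, run cache reload, confirm it is back. The same round-trip as a standalone sketch (profile name from this run):

// Exercise `minikube cache reload`: drop a cached image on the node, reload, re-check.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func mk(args ...string) error {
	out, err := exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "functional-217112"}, args...)...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	img := "registry.k8s.io/pause:latest"
	if err := mk("ssh", "sudo crictl rmi "+img); err != nil {
		log.Fatal(err)
	}
	if err := mk("ssh", "sudo crictl inspecti "+img); err == nil {
		log.Fatal("image still present after rmi")
	}
	if err := mk("cache", "reload"); err != nil {
		log.Fatal(err)
	}
	if err := mk("ssh", "sudo crictl inspecti "+img); err != nil {
		log.Fatal("image missing after reload: ", err)
	}
	fmt.Println("cache reload restored", img)
}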
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.70s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 kubectl -- --context functional-217112 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-217112 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (33.77s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-217112 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1204 23:25:13.575545  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-217112 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.766533863s)
functional_test.go:761: restart took 33.766719968s for "functional-217112" cluster.
I1204 23:25:20.910920  387894 config.go:182] Loaded profile config "functional-217112": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (33.77s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-217112 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.43s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-217112 logs: (1.426690045s)
--- PASS: TestFunctional/serial/LogsCmd (1.43s)

TestFunctional/serial/LogsFileCmd (1.44s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 logs --file /tmp/TestFunctionalserialLogsFileCmd3270827303/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-217112 logs --file /tmp/TestFunctionalserialLogsFileCmd3270827303/001/logs.txt: (1.435737264s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.44s)

TestFunctional/serial/InvalidService (3.87s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-217112 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-217112
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-217112: exit status 115 (325.906103ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30316 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-217112 delete -f testdata/invalidsvc.yaml
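Exit status 115 / SVC_UNREACHABLE is the expected outcome: the NodePort exists, but the service has no running pod behind it. A sketch that checks the same condition directly via the service's endpoints (context from this run; the jsonpath query is illustrative):

// A service with no ready pods has no endpoint addresses; that is what
// `minikube service` trips over with SVC_UNREACHABLE here.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-217112",
		"get", "endpoints", "invalid-svc",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	if err != nil {
		fmt.Println("could not read endpoints:", err)
		return
	}
	if strings.TrimSpace(string(out)) == "" {
		fmt.Println("invalid-svc has no ready endpoints (service unreachable)")
	} else {
		fmt.Println("ready endpoint IPs:", string(out))
	}
}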
--- PASS: TestFunctional/serial/InvalidService (3.87s)

TestFunctional/parallel/ConfigCmd (0.4s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217112 config get cpus: exit status 14 (82.59306ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217112 config get cpus: exit status 14 (59.942165ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
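The round-trip above (unset, get fails with 14, set 2, get succeeds, unset, get fails again) pins down both the value plumbing and the documented exit status for a missing key. The same loop as a sketch (profile name from this run):

// Round-trip a minikube config key and assert exit status 14 on a missing key.
package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func get(key string) (string, int) {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-217112",
		"config", "get", key).Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		return "", ee.ExitCode()
	}
	return string(out), 0
}

func main() {
	// unset on a missing key exits 0, so the error is deliberately ignored here
	exec.Command("out/minikube-linux-amd64", "-p", "functional-217112", "config", "unset", "cpus").Run()
	if _, code := get("cpus"); code != 14 {
		log.Fatalf("expected exit 14 for unset key, got %d", code)
	}
	exec.Command("out/minikube-linux-amd64", "-p", "functional-217112", "config", "set", "cpus", "2").Run()
	val, _ := get("cpus")
	fmt.Println("cpus =", val)
}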
--- PASS: TestFunctional/parallel/ConfigCmd (0.40s)

TestFunctional/parallel/DashboardCmd (10.98s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-217112 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-217112 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 429208: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.98s)

TestFunctional/parallel/DryRun (0.41s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-217112 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-217112 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (186.767852ms)

-- stdout --
	* [functional-217112] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20045
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20045-381016/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-381016/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I1204 23:25:30.991683  428558 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:25:30.992009  428558 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:25:30.992023  428558 out.go:358] Setting ErrFile to fd 2...
	I1204 23:25:30.992029  428558 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:25:30.992351  428558 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-381016/.minikube/bin
	I1204 23:25:30.993082  428558 out.go:352] Setting JSON to false
	I1204 23:25:30.994483  428558 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7680,"bootTime":1733347051,"procs":207,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 23:25:30.994649  428558 start.go:139] virtualization: kvm guest
	I1204 23:25:30.997215  428558 out.go:177] * [functional-217112] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1204 23:25:30.999123  428558 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 23:25:30.999139  428558 notify.go:220] Checking for updates...
	I1204 23:25:31.002432  428558 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 23:25:31.004100  428558 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-381016/kubeconfig
	I1204 23:25:31.006060  428558 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-381016/.minikube
	I1204 23:25:31.007259  428558 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 23:25:31.008973  428558 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 23:25:31.011481  428558 config.go:182] Loaded profile config "functional-217112": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:25:31.012148  428558 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 23:25:31.041327  428558 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1204 23:25:31.041414  428558 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:25:31.100603  428558 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-12-04 23:25:31.089207243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1204 23:25:31.100701  428558 docker.go:318] overlay module found
	I1204 23:25:31.102959  428558 out.go:177] * Using the docker driver based on existing profile
	I1204 23:25:31.104584  428558 start.go:297] selected driver: docker
	I1204 23:25:31.104597  428558 start.go:901] validating driver "docker" against &{Name:functional-217112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-217112 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:25:31.104697  428558 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 23:25:31.107494  428558 out.go:201] 
	W1204 23:25:31.109494  428558 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1204 23:25:31.111404  428558 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-217112 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
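The RSRC_INSUFFICIENT_REQ_MEMORY exit (status 23) comes from the requested 250MiB falling below minikube's usable minimum of 1800MB, caught during --dry-run validation before anything is created. A toy version of that floor check (the constant is taken from the error message above):

// Toy version of the memory floor that rejects the 250MB dry run above.
package main

import (
	"fmt"
	"os"
)

const minUsableMemoryMB = 1800 // the minimum the error message cites

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		os.Exit(23)
	}
}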
--- PASS: TestFunctional/parallel/DryRun (0.41s)
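The exit above is the dry run behaving as intended: --memory 250MB falls below minikube's usable minimum of 1800MB. A minimal sketch of the bounds check behind RSRC_INSUFFICIENT_REQ_MEMORY, with hypothetical names (validateMemory, minimumMiB); this is illustrative, not minikube's actual validation code.

package main

import "fmt"

const minimumMiB = 1800 // "usable minimum of 1800MB" from the log above

// validateMemory rejects a requested allocation below the minimum.
func validateMemory(requestedMiB int) error {
	if requestedMiB < minimumMiB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMiB, minimumMiB)
	}
	return nil
}

func main() {
	if err := validateMemory(250); err != nil {
		fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
	}
}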
TestFunctional/parallel/InternationalLanguage (0.19s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-217112 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-217112 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (191.811506ms)
-- stdout --
	* [functional-217112] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20045
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20045-381016/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-381016/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1204 23:25:30.806127  428418 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:25:30.806315  428418 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:25:30.806344  428418 out.go:358] Setting ErrFile to fd 2...
	I1204 23:25:30.806352  428418 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:25:30.806833  428418 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-381016/.minikube/bin
	I1204 23:25:30.807604  428418 out.go:352] Setting JSON to false
	I1204 23:25:30.809080  428418 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7680,"bootTime":1733347051,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1204 23:25:30.809264  428418 start.go:139] virtualization: kvm guest
	I1204 23:25:30.813440  428418 out.go:177] * [functional-217112] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1204 23:25:30.815098  428418 notify.go:220] Checking for updates...
	I1204 23:25:30.815123  428418 out.go:177]   - MINIKUBE_LOCATION=20045
	I1204 23:25:30.816679  428418 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1204 23:25:30.818100  428418 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-381016/kubeconfig
	I1204 23:25:30.819441  428418 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-381016/.minikube
	I1204 23:25:30.820726  428418 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1204 23:25:30.822259  428418 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1204 23:25:30.824201  428418 config.go:182] Loaded profile config "functional-217112": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:25:30.824862  428418 driver.go:394] Setting default libvirt URI to qemu:///system
	I1204 23:25:30.850058  428418 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1204 23:25:30.850151  428418 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:25:30.912895  428418 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-12-04 23:25:30.897312633 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: brid
ge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1204 23:25:30.913074  428418 docker.go:318] overlay module found
	I1204 23:25:30.915528  428418 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1204 23:25:30.917261  428418 start.go:297] selected driver: docker
	I1204 23:25:30.917284  428418 start.go:901] validating driver "docker" against &{Name:functional-217112 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-217112 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1204 23:25:30.917437  428418 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1204 23:25:30.920610  428418 out.go:201] 
	W1204 23:25:30.922290  428418 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1204 23:25:30.923909  428418 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)
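The French stdout and stderr above are the expected result of this test: under a French locale, minikube localizes its user-facing strings. A minimal sketch of string-table translation, assuming a map keyed by the English message (illustrative only; the pair below is taken from the DryRun and InternationalLanguage logs, but the lookup mechanism is an assumption, not minikube's actual translate package).

package main

import "fmt"

// frFR pairs an English message with its French form, both from the logs above.
var frFR = map[string]string{
	"Using the docker driver based on existing profile": "Utilisation du pilote docker basé sur le profil existant",
}

// translate returns the localized form of msg, falling back to English.
func translate(msg string) string {
	if t, ok := frFR[msg]; ok {
		return t
	}
	return msg
}

func main() {
	fmt.Println("* " + translate("Using the docker driver based on existing profile"))
}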
TestFunctional/parallel/StatusCmd (1.11s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.11s)
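The -f argument in the second command above is a Go text/template rendered against a status struct. A minimal sketch, assuming a struct shaped after the fields the template references ({{.Host}}, {{.Kubelet}}, {{.APIServer}}, {{.Kubeconfig}}); "kublet" is kept verbatim from the test's format string, and the struct is illustrative, not minikube's exact type.

package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	// Prints: host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}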
TestFunctional/parallel/ServiceCmdConnect (7.68s)
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-217112 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-217112 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-49tsm" [d752228d-0232-4e87-9ba4-9965b4c54c32] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-49tsm" [d752228d-0232-4e87-9ba4-9965b4c54c32] Running
2024/12/04 23:25:42 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.005530181s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30588
functional_test.go:1675: http://192.168.49.2:30588: success! body:
Hostname: hello-node-connect-67bdd5bbb4-49tsm
Pod Information:
	-no pod information available-
Server values:
	server_version=nginx: 1.13.3 - lua: 10008
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/
Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30588
	user-agent=Go-http-client/1.1
Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.68s)
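A minimal sketch of the final step above: once `minikube service ... --url` prints a NodePort endpoint, the test issues a plain HTTP GET and reads the echoserver body. The URL below is the one from this run and is not stable across runs.

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	resp, err := http.Get("http://192.168.49.2:30588")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", body) // echoserver reflects the hostname, request info, and headers
}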
TestFunctional/parallel/AddonsCmd (0.15s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)
TestFunctional/parallel/SSHCmd (0.55s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)
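Under the hood, `minikube ssh "echo hello"` runs the command over SSH against the node container. The connection details (user "docker", 127.0.0.1 with a forwarded port, an id_rsa machine key) mirror the sshutil lines later in this report; the wiring below with golang.org/x/crypto/ssh is an illustrative sketch, not minikube's sshutil implementation, and the key path is hypothetical.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/machines/functional-217112/id_rsa") // hypothetical path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only against a local test node
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33150", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.Output("echo hello")
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // "hello"
}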
TestFunctional/parallel/CpCmd (1.91s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh -n functional-217112 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 cp functional-217112:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3488559111/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh -n functional-217112 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh -n functional-217112 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.91s)
TestFunctional/parallel/FileSync (0.25s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/387894/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh "sudo cat /etc/test/nested/copy/387894/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)
TestFunctional/parallel/CertSync (1.53s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/387894.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh "sudo cat /etc/ssl/certs/387894.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/387894.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh "sudo cat /usr/share/ca-certificates/387894.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3878942.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh "sudo cat /etc/ssl/certs/3878942.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/3878942.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh "sudo cat /usr/share/ca-certificates/3878942.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.53s)
TestFunctional/parallel/NodeLabels (0.06s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-217112 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
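The --template above ranges over the node's label map. A minimal sketch of the same go-template construct using Go's text/template directly, with made-up label data (the real labels come from the node object; the kubectl version additionally indexes into .items):

package main

import (
	"os"
	"text/template"
)

func main() {
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-217112",
		"kubernetes.io/os":       "linux",
	}
	// Same shape as the kubectl template, minus the (index .items 0).metadata prefix.
	const tpl = "'{{range $k, $v := .}}{{$k}} {{end}}'\n"
	t := template.Must(template.New("labels").Parse(tpl))
	if err := t.Execute(os.Stdout, labels); err != nil {
		panic(err)
	}
}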
TestFunctional/parallel/NonActiveRuntimeDisabled (0.5s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217112 ssh "sudo systemctl is-active docker": exit status 1 (250.273252ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217112 ssh "sudo systemctl is-active containerd": exit status 1 (251.482986ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.50s)
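The non-zero exits above are expected: `systemctl is-active` exits with status 3 for an inactive unit, which is why the log shows "Process exited with status 3" alongside "inactive" on stdout. A minimal sketch of a check that accepts that combination; this is an illustrative helper, not the test's actual code.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runtimeInactive reports whether the unit is inactive, treating the
// expected exit status 3 as success rather than as a failure.
func runtimeInactive(unit string) (bool, error) {
	out, err := exec.Command("systemctl", "is-active", unit).CombinedOutput()
	state := strings.TrimSpace(string(out))
	if state == "inactive" {
		return true, nil // non-zero exit is expected here
	}
	return false, err
}

func main() {
	ok, err := runtimeInactive("docker")
	fmt.Println("docker inactive:", ok, "err:", err)
}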
TestFunctional/parallel/License (0.18s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)
TestFunctional/parallel/ServiceCmd/DeployApp (9.19s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-217112 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-217112 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-4sch9" [e03dc841-97a9-4327-9f6e-68cf470d17af] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-4sch9" [e03dc841-97a9-4327-9f6e-68cf470d17af] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 9.004140558s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (9.19s)
TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.42s)
TestFunctional/parallel/ProfileCmd/profile_list (0.48s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "415.13108ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "61.43337ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.48s)
TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "528.277206ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "56.578209ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)
TestFunctional/parallel/MountCmd/any-port (7.02s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-217112 /tmp/TestFunctionalparallelMountCmdany-port941445549/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733354729599664142" to /tmp/TestFunctionalparallelMountCmdany-port941445549/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733354729599664142" to /tmp/TestFunctionalparallelMountCmdany-port941445549/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733354729599664142" to /tmp/TestFunctionalparallelMountCmdany-port941445549/001/test-1733354729599664142
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217112 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (458.543875ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1204 23:25:30.058561  387894 retry.go:31] will retry after 616.618747ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  4 23:25 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  4 23:25 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  4 23:25 test-1733354729599664142
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh cat /mount-9p/test-1733354729599664142
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-217112 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [9038464a-7fb6-4d53-b87e-69b7cec07259] Pending
helpers_test.go:344: "busybox-mount" [9038464a-7fb6-4d53-b87e-69b7cec07259] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [9038464a-7fb6-4d53-b87e-69b7cec07259] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [9038464a-7fb6-4d53-b87e-69b7cec07259] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004648499s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-217112 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-217112 /tmp/TestFunctionalparallelMountCmdany-port941445549/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.02s)
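A minimal sketch of the retry pattern visible above ("will retry after 616.618747ms"): the 9p mount may not be visible immediately, so the findmnt probe is retried after a randomized delay. Illustrative only; minikube's retry.go may use a different backoff policy.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs probe up to attempts times, sleeping a randomized
// sub-second delay between failures, and returns the last error.
func retry(attempts int, probe func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = probe(); err == nil {
			return nil
		}
		delay := time.Duration(500+rand.Intn(500)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	tries := 0
	_ = retry(5, func() error {
		tries++
		if tries < 2 {
			return errors.New("exit status 1") // findmnt: mount not visible yet
		}
		return nil // mount is up
	})
}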
TestFunctional/parallel/MountCmd/specific-port (1.9s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-217112 /tmp/TestFunctionalparallelMountCmdspecific-port1127917512/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217112 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (284.346831ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1204 23:25:36.904230  387894 retry.go:31] will retry after 525.992232ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-217112 /tmp/TestFunctionalparallelMountCmdspecific-port1127917512/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217112 ssh "sudo umount -f /mount-9p": exit status 1 (293.849754ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-217112 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-217112 /tmp/TestFunctionalparallelMountCmdspecific-port1127917512/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.90s)
TestFunctional/parallel/ServiceCmd/List (0.5s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)
TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 service list -o json
functional_test.go:1494: Took "535.490735ms" to run "out/minikube-linux-amd64 -p functional-217112 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32488
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)
TestFunctional/parallel/ServiceCmd/Format (0.51s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.51s)
TestFunctional/parallel/MountCmd/VerifyCleanup (2.02s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-217112 /tmp/TestFunctionalparallelMountCmdVerifyCleanup947564579/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-217112 /tmp/TestFunctionalparallelMountCmdVerifyCleanup947564579/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-217112 /tmp/TestFunctionalparallelMountCmdVerifyCleanup947564579/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217112 ssh "findmnt -T" /mount1: exit status 1 (486.073562ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1204 23:25:39.010671  387894 retry.go:31] will retry after 606.423296ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-217112 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-217112 /tmp/TestFunctionalparallelMountCmdVerifyCleanup947564579/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-217112 /tmp/TestFunctionalparallelMountCmdVerifyCleanup947564579/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-217112 /tmp/TestFunctionalparallelMountCmdVerifyCleanup947564579/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.02s)
TestFunctional/parallel/ServiceCmd/URL (0.62s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32488
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.62s)
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-217112 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-217112 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-217112 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-217112 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 431750: os: process already finished
helpers_test.go:502: unable to terminate pid 431507: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.42s)
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-217112 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)
TestFunctional/parallel/Version/short (0.05s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)
TestFunctional/parallel/Version/components (0.47s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-217112 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-217112
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20241023-a345ebe4
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-217112 image ls --format short --alsologtostderr:
I1204 23:25:48.940249  433919 out.go:345] Setting OutFile to fd 1 ...
I1204 23:25:48.940506  433919 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:25:48.940515  433919 out.go:358] Setting ErrFile to fd 2...
I1204 23:25:48.940519  433919 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:25:48.940760  433919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-381016/.minikube/bin
I1204 23:25:48.941395  433919 config.go:182] Loaded profile config "functional-217112": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 23:25:48.941494  433919 config.go:182] Loaded profile config "functional-217112": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 23:25:48.941842  433919 cli_runner.go:164] Run: docker container inspect functional-217112 --format={{.State.Status}}
I1204 23:25:48.959403  433919 ssh_runner.go:195] Run: systemctl --version
I1204 23:25:48.959498  433919 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-217112
I1204 23:25:48.976558  433919 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/functional-217112/id_rsa Username:docker}
I1204 23:25:49.062926  433919 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
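The Stderr trace above shows where the listing comes from: `sudo crictl images --output json` run over SSH inside the node. A minimal sketch of decoding that JSON; the field names follow CRI's ListImages response and are an assumption about crictl's exact output shape.

package main

import (
	"encoding/json"
	"fmt"
)

type imageList struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
		Size     string   `json:"size"`
	} `json:"images"`
}

func main() {
	raw := []byte(`{"images":[{"id":"873ed751","repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"}]}`)
	var list imageList
	if err := json.Unmarshal(raw, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag) // one tag per line, as in the short-format listing above
		}
	}
}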
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-217112 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 0486b6c53a1b5 | 89.5MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 505d571f5fd56 | 92.8MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| docker.io/kindest/kindnetd              | v20241023-a345ebe4 | 9ca7e41918271 | 95MB   |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-217112  | aaa8e65564768 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-apiserver          | v1.31.2            | 9499c9960544e | 95.3MB |
| registry.k8s.io/kube-scheduler          | v1.31.2            | 847c7bc1a5418 | 68.5MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-217112 image ls --format table --alsologtostderr:
I1204 23:25:49.585748  434085 out.go:345] Setting OutFile to fd 1 ...
I1204 23:25:49.585865  434085 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:25:49.585872  434085 out.go:358] Setting ErrFile to fd 2...
I1204 23:25:49.585877  434085 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:25:49.586069  434085 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-381016/.minikube/bin
I1204 23:25:49.586751  434085 config.go:182] Loaded profile config "functional-217112": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 23:25:49.586860  434085 config.go:182] Loaded profile config "functional-217112": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 23:25:49.587239  434085 cli_runner.go:164] Run: docker container inspect functional-217112 --format={{.State.Status}}
I1204 23:25:49.605107  434085 ssh_runner.go:195] Run: systemctl --version
I1204 23:25:49.605160  434085 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-217112
I1204 23:25:49.622436  434085 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/functional-217112/id_rsa Username:docker}
I1204 23:25:49.715277  434085 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-217112 image ls --format json --alsologtostderr:
[{"id":"3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"94965812"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5","repoDigests":["docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16","docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770
004098162f0bb96132d"],"repoTags":["docker.io/kindest/kindnetd:v20241023-a345ebe4"],"size":"94958644"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@
sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"aaa8e655647688f22dcf40238a922a7111791d1a19b4043321ae35a452c8e828","repoDigests":["localhost/minikube-local-cache-test@sha256:7ac4fa1ab931f726095984f7
268201dafb962c019fb8e79090bef920d67d672e"],"repoTags":["localhost/minikube-local-cache-test:functional-217112"],"size":"3330"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0","registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"95274464"},{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9e
a6d35f7d3574949b12df928fe5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"68457798"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["regist
ry.k8s.io/kube-controller-manager:v1.31.2"],"size":"89474374"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":["registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"92783513"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-217112 image ls --format json --alsologtostderr:
I1204 23:25:49.369807  434020 out.go:345] Setting OutFile to fd 1 ...
I1204 23:25:49.369950  434020 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:25:49.369959  434020 out.go:358] Setting ErrFile to fd 2...
I1204 23:25:49.369964  434020 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:25:49.370154  434020 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-381016/.minikube/bin
I1204 23:25:49.370818  434020 config.go:182] Loaded profile config "functional-217112": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 23:25:49.370927  434020 config.go:182] Loaded profile config "functional-217112": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 23:25:49.371281  434020 cli_runner.go:164] Run: docker container inspect functional-217112 --format={{.State.Status}}
I1204 23:25:49.388714  434020 ssh_runner.go:195] Run: systemctl --version
I1204 23:25:49.388774  434020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-217112
I1204 23:25:49.406495  434020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/functional-217112/id_rsa Username:docker}
I1204 23:25:49.495318  434020 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
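Note: the stderr trace above shows where the JSON comes from: minikube opens an SSH session into the node and reads the CRI-O image store with crictl. A minimal manual spot check, assuming the functional-217112 profile is still running, would be:

    $ out/minikube-linux-amd64 -p functional-217112 ssh -- sudo crictl images --output json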

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-217112 image ls --format yaml --alsologtostderr:
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: aaa8e655647688f22dcf40238a922a7111791d1a19b4043321ae35a452c8e828
repoDigests:
- localhost/minikube-local-cache-test@sha256:7ac4fa1ab931f726095984f7268201dafb962c019fb8e79090bef920d67d672e
repoTags:
- localhost/minikube-local-cache-test:functional-217112
size: "3330"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "89474374"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "68457798"
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: 9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5
repoDigests:
- docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16
- docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d
repoTags:
- docker.io/kindest/kindnetd:v20241023-a345ebe4
size: "94958644"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
- registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "95274464"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "92783513"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-217112 image ls --format yaml --alsologtostderr:
I1204 23:25:49.151806  433969 out.go:345] Setting OutFile to fd 1 ...
I1204 23:25:49.151928  433969 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:25:49.151936  433969 out.go:358] Setting ErrFile to fd 2...
I1204 23:25:49.151940  433969 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:25:49.152131  433969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-381016/.minikube/bin
I1204 23:25:49.152785  433969 config.go:182] Loaded profile config "functional-217112": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 23:25:49.152934  433969 config.go:182] Loaded profile config "functional-217112": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 23:25:49.153318  433969 cli_runner.go:164] Run: docker container inspect functional-217112 --format={{.State.Status}}
I1204 23:25:49.170873  433969 ssh_runner.go:195] Run: systemctl --version
I1204 23:25:49.170927  433969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-217112
I1204 23:25:49.187243  433969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/functional-217112/id_rsa Username:docker}
I1204 23:25:49.275481  433969 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.07s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-217112 ssh pgrep buildkitd: exit status 1 (246.658129ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 image build -t localhost/my-image:functional-217112 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-217112 image build -t localhost/my-image:functional-217112 testdata/build --alsologtostderr: (1.600711007s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-217112 image build -t localhost/my-image:functional-217112 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ce19baa535d
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-217112
--> b78ff39d113
Successfully tagged localhost/my-image:functional-217112
b78ff39d113560449b8293dcb861e7935b1d251a4d27c1ccd9cd6ca978e34b38
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-217112 image build -t localhost/my-image:functional-217112 testdata/build --alsologtostderr:
I1204 23:25:50.057402  434228 out.go:345] Setting OutFile to fd 1 ...
I1204 23:25:50.057525  434228 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:25:50.057535  434228 out.go:358] Setting ErrFile to fd 2...
I1204 23:25:50.057539  434228 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1204 23:25:50.057753  434228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-381016/.minikube/bin
I1204 23:25:50.058413  434228 config.go:182] Loaded profile config "functional-217112": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 23:25:50.059010  434228 config.go:182] Loaded profile config "functional-217112": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1204 23:25:50.059428  434228 cli_runner.go:164] Run: docker container inspect functional-217112 --format={{.State.Status}}
I1204 23:25:50.077124  434228 ssh_runner.go:195] Run: systemctl --version
I1204 23:25:50.077235  434228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-217112
I1204 23:25:50.094611  434228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/functional-217112/id_rsa Username:docker}
I1204 23:25:50.183388  434228 build_images.go:161] Building image from path: /tmp/build.3403668258.tar
I1204 23:25:50.183468  434228 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1204 23:25:50.192745  434228 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3403668258.tar
I1204 23:25:50.196134  434228 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3403668258.tar: stat -c "%s %y" /var/lib/minikube/build/build.3403668258.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3403668258.tar': No such file or directory
I1204 23:25:50.196174  434228 ssh_runner.go:362] scp /tmp/build.3403668258.tar --> /var/lib/minikube/build/build.3403668258.tar (3072 bytes)
I1204 23:25:50.220129  434228 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3403668258
I1204 23:25:50.229064  434228 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3403668258 -xf /var/lib/minikube/build/build.3403668258.tar
I1204 23:25:50.237860  434228 crio.go:315] Building image: /var/lib/minikube/build/build.3403668258
I1204 23:25:50.237954  434228 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-217112 /var/lib/minikube/build/build.3403668258 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1204 23:25:51.584139  434228 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-217112 /var/lib/minikube/build/build.3403668258 --cgroup-manager=cgroupfs: (1.346153715s)
I1204 23:25:51.584197  434228 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3403668258
I1204 23:25:51.592631  434228 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3403668258.tar
I1204 23:25:51.600274  434228 build_images.go:217] Built localhost/my-image:functional-217112 from /tmp/build.3403668258.tar
I1204 23:25:51.600321  434228 build_images.go:133] succeeded building to: functional-217112
I1204 23:25:51.600326  434228 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.07s)
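Note: the stderr trace above shows the full crio build path: the local context is packed into a tar under /tmp, copied to /var/lib/minikube/build on the node, extracted, and built with podman under the cgroupfs cgroup manager. A hand-run sketch of the same steps (the build.ctx name is illustrative; minikube generates a random build.<N> directory per run):

    $ tar -cf /tmp/build.ctx.tar -C testdata/build .
    $ out/minikube-linux-amd64 -p functional-217112 cp /tmp/build.ctx.tar /home/docker/build.ctx.tar
    $ out/minikube-linux-amd64 -p functional-217112 ssh "sudo mkdir -p /var/lib/minikube/build/build.ctx && sudo tar -C /var/lib/minikube/build/build.ctx -xf /home/docker/build.ctx.tar && sudo podman build -t localhost/my-image:functional-217112 /var/lib/minikube/build/build.ctx --cgroup-manager=cgroupfs"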

TestFunctional/parallel/ImageCommands/ImageRemove (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 image rm kicbase/echo-server:functional-217112 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.50s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 update-context --alsologtostderr -v=2
E1204 23:26:35.497783  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-217112 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)
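Note: all three UpdateContextCmd variants drive the same command, update-context, which rewrites the profile's kubeconfig entry so the server address matches the running cluster (or reports that no change was needed). One way to inspect the resulting entry with plain kubectl, assuming the functional-217112 context is current:

    $ kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'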

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-217112 tunnel --alsologtostderr] ...
E1204 23:33:51.635364  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-217112
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-217112
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-217112
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (100.72s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-649825 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-649825 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m40.02821265s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (100.72s)
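Note: the --ha flag requests a multi-control-plane topology, which is why the profile comes up with ha-649825, ha-649825-m02 and ha-649825-m03 all acting as control planes (see the status output under StopSecondaryNode below). A quick role check once the cluster is up:

    $ kubectl --context ha-649825 get nodes -o wide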

TestMultiControlPlane/serial/DeployApp (4.75s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-649825 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-649825 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-649825 -- rollout status deployment/busybox: (2.75326206s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-649825 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-649825 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-649825 -- exec busybox-7dff88458-8qdvn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-649825 -- exec busybox-7dff88458-spzqt -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-649825 -- exec busybox-7dff88458-xpdwt -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-649825 -- exec busybox-7dff88458-8qdvn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-649825 -- exec busybox-7dff88458-spzqt -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-649825 -- exec busybox-7dff88458-xpdwt -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-649825 -- exec busybox-7dff88458-8qdvn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-649825 -- exec busybox-7dff88458-spzqt -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-649825 -- exec busybox-7dff88458-xpdwt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.75s)
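Note: DeployApp verifies in-cluster DNS from every busybox replica at three levels: an external name (kubernetes.io), the short service name (kubernetes.default), and the fully qualified service name (kubernetes.default.svc.cluster.local). If any of these lookups fails, CoreDNS is the usual first suspect; a spot check, assuming the standard CoreDNS label:

    $ kubectl --context ha-649825 -n kube-system get pods -l k8s-app=kube-dns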

TestMultiControlPlane/serial/PingHostFromPods (1.09s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-649825 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-649825 -- exec busybox-7dff88458-8qdvn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-649825 -- exec busybox-7dff88458-8qdvn -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-649825 -- exec busybox-7dff88458-spzqt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-649825 -- exec busybox-7dff88458-spzqt -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-649825 -- exec busybox-7dff88458-xpdwt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-649825 -- exec busybox-7dff88458-xpdwt -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.09s)
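Note: the pipeline in these commands extracts the host address from busybox nslookup output: awk 'NR==5' keeps the fifth line, which in busybox's fixed output layout carries the answer for host.minikube.internal, and cut -d' ' -f3 takes the third space-separated field, the IP itself (192.168.49.1, the host's address on the cluster network), which each pod then pings. Run standalone:

    $ out/minikube-linux-amd64 kubectl -p ha-649825 -- exec busybox-7dff88458-8qdvn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"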

TestMultiControlPlane/serial/AddWorkerNode (35.89s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-649825 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-649825 -v=7 --alsologtostderr: (35.051348852s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (35.89s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-649825 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

TestMultiControlPlane/serial/CopyFile (16.03s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 cp testdata/cp-test.txt ha-649825:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 cp ha-649825:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile888847639/001/cp-test_ha-649825.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 cp ha-649825:/home/docker/cp-test.txt ha-649825-m02:/home/docker/cp-test_ha-649825_ha-649825-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825-m02 "sudo cat /home/docker/cp-test_ha-649825_ha-649825-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 cp ha-649825:/home/docker/cp-test.txt ha-649825-m03:/home/docker/cp-test_ha-649825_ha-649825-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825-m03 "sudo cat /home/docker/cp-test_ha-649825_ha-649825-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 cp ha-649825:/home/docker/cp-test.txt ha-649825-m04:/home/docker/cp-test_ha-649825_ha-649825-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825-m04 "sudo cat /home/docker/cp-test_ha-649825_ha-649825-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 cp testdata/cp-test.txt ha-649825-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 cp ha-649825-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile888847639/001/cp-test_ha-649825-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 cp ha-649825-m02:/home/docker/cp-test.txt ha-649825:/home/docker/cp-test_ha-649825-m02_ha-649825.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825 "sudo cat /home/docker/cp-test_ha-649825-m02_ha-649825.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 cp ha-649825-m02:/home/docker/cp-test.txt ha-649825-m03:/home/docker/cp-test_ha-649825-m02_ha-649825-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825-m03 "sudo cat /home/docker/cp-test_ha-649825-m02_ha-649825-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 cp ha-649825-m02:/home/docker/cp-test.txt ha-649825-m04:/home/docker/cp-test_ha-649825-m02_ha-649825-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825-m04 "sudo cat /home/docker/cp-test_ha-649825-m02_ha-649825-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 cp testdata/cp-test.txt ha-649825-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 cp ha-649825-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile888847639/001/cp-test_ha-649825-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 cp ha-649825-m03:/home/docker/cp-test.txt ha-649825:/home/docker/cp-test_ha-649825-m03_ha-649825.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825 "sudo cat /home/docker/cp-test_ha-649825-m03_ha-649825.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 cp ha-649825-m03:/home/docker/cp-test.txt ha-649825-m02:/home/docker/cp-test_ha-649825-m03_ha-649825-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825-m02 "sudo cat /home/docker/cp-test_ha-649825-m03_ha-649825-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 cp ha-649825-m03:/home/docker/cp-test.txt ha-649825-m04:/home/docker/cp-test_ha-649825-m03_ha-649825-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825-m04 "sudo cat /home/docker/cp-test_ha-649825-m03_ha-649825-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 cp testdata/cp-test.txt ha-649825-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 cp ha-649825-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile888847639/001/cp-test_ha-649825-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 cp ha-649825-m04:/home/docker/cp-test.txt ha-649825:/home/docker/cp-test_ha-649825-m04_ha-649825.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825 "sudo cat /home/docker/cp-test_ha-649825-m04_ha-649825.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 cp ha-649825-m04:/home/docker/cp-test.txt ha-649825-m02:/home/docker/cp-test_ha-649825-m04_ha-649825-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825-m02 "sudo cat /home/docker/cp-test_ha-649825-m04_ha-649825-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 cp ha-649825-m04:/home/docker/cp-test.txt ha-649825-m03:/home/docker/cp-test_ha-649825-m04_ha-649825-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825-m03 "sudo cat /home/docker/cp-test_ha-649825-m04_ha-649825-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.03s)
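Note: CopyFile exercises every direction minikube cp supports on the four-node cluster: host to node, node back to the host, and node to node, each transfer verified by catting the destination file over ssh. One round trip by hand:

    $ out/minikube-linux-amd64 -p ha-649825 cp testdata/cp-test.txt ha-649825-m02:/home/docker/cp-test.txt
    $ out/minikube-linux-amd64 -p ha-649825 ssh -n ha-649825-m02 "sudo cat /home/docker/cp-test.txt"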

TestMultiControlPlane/serial/StopSecondaryNode (12.52s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-649825 node stop m02 -v=7 --alsologtostderr: (11.855913689s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-649825 status -v=7 --alsologtostderr: exit status 7 (667.477752ms)
-- stdout --
	ha-649825
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-649825-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-649825-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-649825-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1204 23:38:44.487952  459802 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:38:44.488097  459802 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:38:44.488106  459802 out.go:358] Setting ErrFile to fd 2...
	I1204 23:38:44.488111  459802 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:38:44.488280  459802 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-381016/.minikube/bin
	I1204 23:38:44.488444  459802 out.go:352] Setting JSON to false
	I1204 23:38:44.488473  459802 mustload.go:65] Loading cluster: ha-649825
	I1204 23:38:44.488625  459802 notify.go:220] Checking for updates...
	I1204 23:38:44.488899  459802 config.go:182] Loaded profile config "ha-649825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:38:44.488921  459802 status.go:174] checking status of ha-649825 ...
	I1204 23:38:44.489340  459802 cli_runner.go:164] Run: docker container inspect ha-649825 --format={{.State.Status}}
	I1204 23:38:44.509805  459802 status.go:371] ha-649825 host status = "Running" (err=<nil>)
	I1204 23:38:44.509834  459802 host.go:66] Checking if "ha-649825" exists ...
	I1204 23:38:44.510117  459802 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-649825
	I1204 23:38:44.528793  459802 host.go:66] Checking if "ha-649825" exists ...
	I1204 23:38:44.529197  459802 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1204 23:38:44.529256  459802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-649825
	I1204 23:38:44.547332  459802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/ha-649825/id_rsa Username:docker}
	I1204 23:38:44.640234  459802 ssh_runner.go:195] Run: systemctl --version
	I1204 23:38:44.644643  459802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:38:44.655258  459802 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:38:44.702587  459802 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:true NGoroutines:72 SystemTime:2024-12-04 23:38:44.69286011 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1204 23:38:44.703337  459802 kubeconfig.go:125] found "ha-649825" server: "https://192.168.49.254:8443"
	I1204 23:38:44.703375  459802 api_server.go:166] Checking apiserver status ...
	I1204 23:38:44.703417  459802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 23:38:44.714585  459802 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1520/cgroup
	I1204 23:38:44.724567  459802 api_server.go:182] apiserver freezer: "2:freezer:/docker/67011468df00873d3ba3216e663a1c2f1bfa83300a4cefc90ba9410564d149fa/crio/crio-cacc80e70c31f697ba11a97f1c15cdc1a1154284b414d18b0e544e8542347053"
	I1204 23:38:44.724635  459802 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/67011468df00873d3ba3216e663a1c2f1bfa83300a4cefc90ba9410564d149fa/crio/crio-cacc80e70c31f697ba11a97f1c15cdc1a1154284b414d18b0e544e8542347053/freezer.state
	I1204 23:38:44.733807  459802 api_server.go:204] freezer state: "THAWED"
	I1204 23:38:44.733841  459802 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1204 23:38:44.737774  459802 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1204 23:38:44.737809  459802 status.go:463] ha-649825 apiserver status = Running (err=<nil>)
	I1204 23:38:44.737823  459802 status.go:176] ha-649825 status: &{Name:ha-649825 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1204 23:38:44.737845  459802 status.go:174] checking status of ha-649825-m02 ...
	I1204 23:38:44.738184  459802 cli_runner.go:164] Run: docker container inspect ha-649825-m02 --format={{.State.Status}}
	I1204 23:38:44.756268  459802 status.go:371] ha-649825-m02 host status = "Stopped" (err=<nil>)
	I1204 23:38:44.756292  459802 status.go:384] host is not running, skipping remaining checks
	I1204 23:38:44.756298  459802 status.go:176] ha-649825-m02 status: &{Name:ha-649825-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1204 23:38:44.756322  459802 status.go:174] checking status of ha-649825-m03 ...
	I1204 23:38:44.756570  459802 cli_runner.go:164] Run: docker container inspect ha-649825-m03 --format={{.State.Status}}
	I1204 23:38:44.775422  459802 status.go:371] ha-649825-m03 host status = "Running" (err=<nil>)
	I1204 23:38:44.775475  459802 host.go:66] Checking if "ha-649825-m03" exists ...
	I1204 23:38:44.775893  459802 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-649825-m03
	I1204 23:38:44.793766  459802 host.go:66] Checking if "ha-649825-m03" exists ...
	I1204 23:38:44.794066  459802 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1204 23:38:44.794110  459802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-649825-m03
	I1204 23:38:44.812267  459802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/ha-649825-m03/id_rsa Username:docker}
	I1204 23:38:44.899842  459802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:38:44.911616  459802 kubeconfig.go:125] found "ha-649825" server: "https://192.168.49.254:8443"
	I1204 23:38:44.911648  459802 api_server.go:166] Checking apiserver status ...
	I1204 23:38:44.911684  459802 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 23:38:44.922160  459802 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1413/cgroup
	I1204 23:38:44.930958  459802 api_server.go:182] apiserver freezer: "2:freezer:/docker/e6c76aa5494791419d60f9eedc84cb78e204cbb8d1e7a7ddfc53146d00da01c9/crio/crio-29a5f0fd7f752b5e43c7e75352a89feed19564b4263044782c23af3e7920af40"
	I1204 23:38:44.931024  459802 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e6c76aa5494791419d60f9eedc84cb78e204cbb8d1e7a7ddfc53146d00da01c9/crio/crio-29a5f0fd7f752b5e43c7e75352a89feed19564b4263044782c23af3e7920af40/freezer.state
	I1204 23:38:44.939492  459802 api_server.go:204] freezer state: "THAWED"
	I1204 23:38:44.939524  459802 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1204 23:38:44.943396  459802 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1204 23:38:44.943421  459802 status.go:463] ha-649825-m03 apiserver status = Running (err=<nil>)
	I1204 23:38:44.943431  459802 status.go:176] ha-649825-m03 status: &{Name:ha-649825-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1204 23:38:44.943445  459802 status.go:174] checking status of ha-649825-m04 ...
	I1204 23:38:44.943733  459802 cli_runner.go:164] Run: docker container inspect ha-649825-m04 --format={{.State.Status}}
	I1204 23:38:44.960878  459802 status.go:371] ha-649825-m04 host status = "Running" (err=<nil>)
	I1204 23:38:44.960903  459802 host.go:66] Checking if "ha-649825-m04" exists ...
	I1204 23:38:44.961220  459802 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-649825-m04
	I1204 23:38:44.978479  459802 host.go:66] Checking if "ha-649825-m04" exists ...
	I1204 23:38:44.978799  459802 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1204 23:38:44.978849  459802 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-649825-m04
	I1204 23:38:44.997711  459802 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/ha-649825-m04/id_rsa Username:docker}
	I1204 23:38:45.092336  459802 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:38:45.103417  459802 status.go:176] ha-649825-m04 status: &{Name:ha-649825-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.52s)
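Note: the stderr trace above shows how status decides an apiserver is healthy on crio: pgrep finds the kube-apiserver pid, /proc/<pid>/cgroup gives its freezer cgroup, the freezer state must be THAWED, and finally /healthz is probed through the HA load-balancer address from the kubeconfig. That last step can be reproduced by hand (-k because the apiserver certificate is not in the host trust store):

    $ curl -k https://192.168.49.254:8443/healthz
    ok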

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.69s)

TestMultiControlPlane/serial/RestartSecondaryNode (33.87s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 node start m02 -v=7 --alsologtostderr
E1204 23:38:51.635952  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-649825 node start m02 -v=7 --alsologtostderr: (32.734212682s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-649825 status -v=7 --alsologtostderr: (1.051720597s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (33.87s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.12s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.122480068s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.12s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (134.11s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-649825 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-649825 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 stop -p ha-649825 -v=7 --alsologtostderr: (36.64523582s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-649825 --wait=true -v=7 --alsologtostderr
E1204 23:40:14.701064  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:40:27.713412  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/functional-217112/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:40:27.719855  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/functional-217112/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:40:27.731364  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/functional-217112/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:40:27.752740  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/functional-217112/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:40:27.794170  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/functional-217112/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:40:27.875998  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/functional-217112/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:40:28.037269  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/functional-217112/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:40:28.359468  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/functional-217112/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:40:29.001500  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/functional-217112/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:40:30.283382  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/functional-217112/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:40:32.845547  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/functional-217112/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:40:37.967664  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/functional-217112/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:40:48.209898  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/functional-217112/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:41:08.691522  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/functional-217112/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-649825 --wait=true -v=7 --alsologtostderr: (1m37.349305479s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-649825
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (134.11s)
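Note: this test stops the whole four-node cluster and starts it again with --wait=true, then checks that node list returns the same node set as before the restart. The interleaved cert_rotation errors appear to be noise from the harness's kubeconfig watcher, which still references client certificates of the addons-630093 and functional-217112 profiles deleted earlier in the run; they are not test failures. The final check by hand:

    $ out/minikube-linux-amd64 node list -p ha-649825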

TestMultiControlPlane/serial/DeleteSecondaryNode (11.39s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-649825 node delete m03 -v=7 --alsologtostderr: (10.626915721s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.39s)
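Note: the go-template at ha_test.go:521 walks every node and its status.conditions, printing the status of the condition whose type is Ready, so the healthy post-delete cluster (two control planes plus one worker) prints True three times. An equivalent jsonpath form that also names each node:

    $ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'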

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

TestMultiControlPlane/serial/StopCluster (35.59s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 stop -v=7 --alsologtostderr
E1204 23:41:49.654039  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/functional-217112/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-649825 stop -v=7 --alsologtostderr: (35.476453195s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-649825 status -v=7 --alsologtostderr: exit status 7 (110.228119ms)
-- stdout --
	ha-649825
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-649825-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-649825-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1204 23:42:22.474937  476553 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:42:22.475105  476553 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:42:22.475119  476553 out.go:358] Setting ErrFile to fd 2...
	I1204 23:42:22.475125  476553 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:42:22.475361  476553 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-381016/.minikube/bin
	I1204 23:42:22.475534  476553 out.go:352] Setting JSON to false
	I1204 23:42:22.475563  476553 mustload.go:65] Loading cluster: ha-649825
	I1204 23:42:22.475633  476553 notify.go:220] Checking for updates...
	I1204 23:42:22.475943  476553 config.go:182] Loaded profile config "ha-649825": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:42:22.475965  476553 status.go:174] checking status of ha-649825 ...
	I1204 23:42:22.476422  476553 cli_runner.go:164] Run: docker container inspect ha-649825 --format={{.State.Status}}
	I1204 23:42:22.496939  476553 status.go:371] ha-649825 host status = "Stopped" (err=<nil>)
	I1204 23:42:22.496977  476553 status.go:384] host is not running, skipping remaining checks
	I1204 23:42:22.496986  476553 status.go:176] ha-649825 status: &{Name:ha-649825 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1204 23:42:22.497021  476553 status.go:174] checking status of ha-649825-m02 ...
	I1204 23:42:22.497381  476553 cli_runner.go:164] Run: docker container inspect ha-649825-m02 --format={{.State.Status}}
	I1204 23:42:22.515627  476553 status.go:371] ha-649825-m02 host status = "Stopped" (err=<nil>)
	I1204 23:42:22.515678  476553 status.go:384] host is not running, skipping remaining checks
	I1204 23:42:22.515688  476553 status.go:176] ha-649825-m02 status: &{Name:ha-649825-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1204 23:42:22.515713  476553 status.go:174] checking status of ha-649825-m04 ...
	I1204 23:42:22.516077  476553 cli_runner.go:164] Run: docker container inspect ha-649825-m04 --format={{.State.Status}}
	I1204 23:42:22.533069  476553 status.go:371] ha-649825-m04 host status = "Stopped" (err=<nil>)
	I1204 23:42:22.533095  476553 status.go:384] host is not running, skipping remaining checks
	I1204 23:42:22.533101  476553 status.go:176] ha-649825-m04 status: &{Name:ha-649825-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.59s)
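
The non-zero exit above is the assertion rather than a failure: with every host stopped, status exits with code 7, and the test accepts exactly that. A minimal sketch for scripting against the same behavior (only the explicit exit-code check is added here):

$ out/minikube-linux-amd64 -p ha-649825 status -v=7 --alsologtostderr
$ echo $?    # 7 in this run: all hosts report "Stopped"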

TestMultiControlPlane/serial/RestartCluster (112.67s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-649825 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1204 23:43:11.576093  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/functional-217112/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:43:51.636018  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-649825 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m51.888434854s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (112.67s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.66s)

TestMultiControlPlane/serial/AddSecondaryNode (41.52s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-649825 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-649825 --control-plane -v=7 --alsologtostderr: (40.658271845s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-649825 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (41.52s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

TestJSONOutput/start/Command (40.89s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-649600 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
E1204 23:45:27.714677  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/functional-217112/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-649600 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (40.89009582s)
--- PASS: TestJSONOutput/start/Command (40.89s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.68s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-649600 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-649600 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.77s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-649600 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-649600 --output=json --user=testUser: (5.767644785s)
--- PASS: TestJSONOutput/stop/Command (5.77s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-105856 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-105856 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (73.900585ms)
-- stdout --
	{"specversion":"1.0","id":"fd17cbc5-fa93-4ace-8799-65d1b5ba6ef6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-105856] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a18f0ca8-aad0-4901-8a6d-26c2897eeb03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20045"}}
	{"specversion":"1.0","id":"3fde8554-6e33-48fb-80bd-0a79581f942f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2a1da5ad-a0df-4a3b-9f84-80b960b34667","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20045-381016/kubeconfig"}}
	{"specversion":"1.0","id":"94ea5c5e-0949-4a5a-840b-6516399bf665","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-381016/.minikube"}}
	{"specversion":"1.0","id":"28fe6368-0157-4793-8d88-34f301ec2563","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"bbb3f527-5926-4849-b378-bb420eae4a65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"91802e4d-dcca-46b0-b329-da5cabed33a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-105856" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-105856
--- PASS: TestErrorJSONOutput (0.22s)
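
Every line emitted under --output=json is a CloudEvents envelope, so the DRV_UNSUPPORTED_OS error above is machine-readable rather than scraped from text. A sketch of pulling the error message out of the stream with jq (jq is an assumption here; the test itself decodes the JSON in Go):

$ out/minikube-linux-amd64 start -p json-output-error-105856 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
# prints: The driver 'fail' is not supported on linux/amd64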

TestKicCustomNetwork/create_custom_network (27.84s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-653803 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-653803 --network=: (25.787332168s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-653803" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-653803
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-653803: (2.038638734s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.84s)

TestKicCustomNetwork/use_default_bridge_network (25.56s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-683654 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-683654 --network=bridge: (23.643421511s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-683654" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-683654
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-683654: (1.898751547s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.56s)
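
The two KicCustomNetwork cases differ only in the value of --network: left empty, minikube creates a dedicated network for the profile, while --network=bridge reuses Docker's default bridge. The verification step is the same listing in both cases:

$ docker network ls --format {{.Name}}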

TestKicExistingNetwork (22.94s)

=== RUN   TestKicExistingNetwork
I1204 23:46:52.409769  387894 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1204 23:46:52.426219  387894 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1204 23:46:52.426288  387894 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1204 23:46:52.426317  387894 cli_runner.go:164] Run: docker network inspect existing-network
W1204 23:46:52.443362  387894 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1204 23:46:52.443403  387894 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1204 23:46:52.443419  387894 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1204 23:46:52.443564  387894 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1204 23:46:52.461414  387894 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9a8dc337d53c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:86:b7:cb:c6} reservation:<nil>}
I1204 23:46:52.461917  387894 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000c15aa0}
I1204 23:46:52.461943  387894 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1204 23:46:52.461983  387894 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1204 23:46:52.526771  387894 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-799453 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-799453 --network=existing-network: (20.913201957s)
helpers_test.go:175: Cleaning up "existing-network-799453" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-799453
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-799453: (1.877487158s)
I1204 23:47:15.335721  387894 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (22.94s)
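
The pre-created network the profile is pointed at can be reproduced with the same invocation the harness logs above; subnet 192.168.58.0/24 is simply the first free private range found in this run:

$ docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network \
      existing-network
$ out/minikube-linux-amd64 start -p existing-network-799453 --network=existing-network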

TestKicCustomSubnet (25.82s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-816226 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-816226 --subnet=192.168.60.0/24: (23.715087539s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-816226 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-816226" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-816226
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-816226: (2.082642654s)
--- PASS: TestKicCustomSubnet (25.82s)

TestKicStaticIP (24.06s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-347433 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-347433 --static-ip=192.168.200.200: (21.823966975s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-347433 ip
helpers_test.go:175: Cleaning up "static-ip-347433" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-347433
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-347433: (2.101641409s)
--- PASS: TestKicStaticIP (24.06s)
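
The static-IP assertion is direct: the container is pinned to the requested address, so the ip subcommand is expected to echo the flag value back:

$ out/minikube-linux-amd64 start -p static-ip-347433 --static-ip=192.168.200.200
$ out/minikube-linux-amd64 -p static-ip-347433 ip    # expected: 192.168.200.200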

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (47.44s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-031082 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-031082 --driver=docker  --container-runtime=crio: (21.106433496s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-047469 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-047469 --driver=docker  --container-runtime=crio: (21.018602724s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-031082
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-047469
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-047469" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-047469
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-047469: (1.869145221s)
helpers_test.go:175: Cleaning up "first-031082" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-031082
E1204 23:48:51.635073  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-031082: (2.265780526s)
--- PASS: TestMinikubeProfile (47.44s)

TestMountStart/serial/StartWithMountFirst (8.21s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-996098 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-996098 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.206309073s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.21s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-996098 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)
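
Verification here is nothing more than listing the host mount from inside the node; the start invocation pins the mount's uid, gid, msize, and port. Replayed by hand, with the flags copied from the invocation logged above:

$ out/minikube-linux-amd64 start -p mount-start-1-996098 --memory=2048 --mount \
      --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
      --no-kubernetes --driver=docker --container-runtime=crio
$ out/minikube-linux-amd64 -p mount-start-1-996098 ssh -- ls /minikube-host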

TestMountStart/serial/StartWithMountSecond (8.23s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-014931 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-014931 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.22722112s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.23s)

TestMountStart/serial/VerifyMountSecond (0.25s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-014931 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.61s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-996098 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-996098 --alsologtostderr -v=5: (1.608062479s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-014931 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.18s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-014931
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-014931: (1.182711832s)
--- PASS: TestMountStart/serial/Stop (1.18s)

TestMountStart/serial/RestartStopped (7.32s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-014931
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-014931: (6.323652271s)
--- PASS: TestMountStart/serial/RestartStopped (7.32s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-014931 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

TestMultiNode/serial/FreshStart2Nodes (73.67s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-558690 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1204 23:50:27.713648  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/functional-217112/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-558690 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m13.220355275s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (73.67s)

TestMultiNode/serial/DeployApp2Nodes (3.73s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558690 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558690 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-558690 -- rollout status deployment/busybox: (2.286754179s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558690 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558690 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558690 -- exec busybox-7dff88458-h5vv8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558690 -- exec busybox-7dff88458-z2v82 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558690 -- exec busybox-7dff88458-h5vv8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558690 -- exec busybox-7dff88458-z2v82 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558690 -- exec busybox-7dff88458-h5vv8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558690 -- exec busybox-7dff88458-z2v82 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.73s)

TestMultiNode/serial/PingHostFrom2Pods (0.76s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558690 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558690 -- exec busybox-7dff88458-h5vv8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558690 -- exec busybox-7dff88458-h5vv8 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558690 -- exec busybox-7dff88458-z2v82 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-558690 -- exec busybox-7dff88458-z2v82 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.76s)
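
The pipeline inside each exec is how the test recovers the host gateway address: it assumes the resolved IP for host.minikube.internal lands on the fifth line of busybox nslookup output, selects that line with awk 'NR==5', and takes the third space-separated field with cut. In this run it resolves to 192.168.67.1, which the follow-up ping targets:

$ kubectl --context multinode-558690 exec busybox-7dff88458-h5vv8 -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
$ kubectl --context multinode-558690 exec busybox-7dff88458-h5vv8 -- \
      sh -c "ping -c 1 192.168.67.1"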

TestMultiNode/serial/AddNode (30.69s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-558690 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-558690 -v 3 --alsologtostderr: (30.083721787s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (30.69s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-558690 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.64s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.64s)

TestMultiNode/serial/CopyFile (9.14s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 cp testdata/cp-test.txt multinode-558690:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 ssh -n multinode-558690 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 cp multinode-558690:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile350092428/001/cp-test_multinode-558690.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 ssh -n multinode-558690 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 cp multinode-558690:/home/docker/cp-test.txt multinode-558690-m02:/home/docker/cp-test_multinode-558690_multinode-558690-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 ssh -n multinode-558690 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 ssh -n multinode-558690-m02 "sudo cat /home/docker/cp-test_multinode-558690_multinode-558690-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 cp multinode-558690:/home/docker/cp-test.txt multinode-558690-m03:/home/docker/cp-test_multinode-558690_multinode-558690-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 ssh -n multinode-558690 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 ssh -n multinode-558690-m03 "sudo cat /home/docker/cp-test_multinode-558690_multinode-558690-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 cp testdata/cp-test.txt multinode-558690-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 ssh -n multinode-558690-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 cp multinode-558690-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile350092428/001/cp-test_multinode-558690-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 ssh -n multinode-558690-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 cp multinode-558690-m02:/home/docker/cp-test.txt multinode-558690:/home/docker/cp-test_multinode-558690-m02_multinode-558690.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 ssh -n multinode-558690-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 ssh -n multinode-558690 "sudo cat /home/docker/cp-test_multinode-558690-m02_multinode-558690.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 cp multinode-558690-m02:/home/docker/cp-test.txt multinode-558690-m03:/home/docker/cp-test_multinode-558690-m02_multinode-558690-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 ssh -n multinode-558690-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 ssh -n multinode-558690-m03 "sudo cat /home/docker/cp-test_multinode-558690-m02_multinode-558690-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 cp testdata/cp-test.txt multinode-558690-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 ssh -n multinode-558690-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 cp multinode-558690-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile350092428/001/cp-test_multinode-558690-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 ssh -n multinode-558690-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 cp multinode-558690-m03:/home/docker/cp-test.txt multinode-558690:/home/docker/cp-test_multinode-558690-m03_multinode-558690.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 ssh -n multinode-558690-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 ssh -n multinode-558690 "sudo cat /home/docker/cp-test_multinode-558690-m03_multinode-558690.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 cp multinode-558690-m03:/home/docker/cp-test.txt multinode-558690-m02:/home/docker/cp-test_multinode-558690-m03_multinode-558690-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 ssh -n multinode-558690-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 ssh -n multinode-558690-m02 "sudo cat /home/docker/cp-test_multinode-558690-m03_multinode-558690-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.14s)
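
The copy matrix above exercises every direction minikube cp supports, each verified by ssh plus sudo cat on the destination. The three general forms, taken from the invocations logged (destination paths illustrative):

$ out/minikube-linux-amd64 -p multinode-558690 cp testdata/cp-test.txt multinode-558690:/home/docker/cp-test.txt
$ out/minikube-linux-amd64 -p multinode-558690 cp multinode-558690:/home/docker/cp-test.txt /tmp/cp-test.txt
$ out/minikube-linux-amd64 -p multinode-558690 cp multinode-558690:/home/docker/cp-test.txt multinode-558690-m02:/home/docker/cp-test.txt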

TestMultiNode/serial/StopNode (2.12s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-558690 node stop m03: (1.179170586s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-558690 status: exit status 7 (468.465521ms)
-- stdout --
	multinode-558690
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-558690-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-558690-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-558690 status --alsologtostderr: exit status 7 (467.847729ms)
-- stdout --
	multinode-558690
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-558690-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-558690-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1204 23:51:22.444781  542564 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:51:22.444919  542564 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:51:22.444929  542564 out.go:358] Setting ErrFile to fd 2...
	I1204 23:51:22.444935  542564 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:51:22.445144  542564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-381016/.minikube/bin
	I1204 23:51:22.445370  542564 out.go:352] Setting JSON to false
	I1204 23:51:22.445414  542564 mustload.go:65] Loading cluster: multinode-558690
	I1204 23:51:22.445525  542564 notify.go:220] Checking for updates...
	I1204 23:51:22.445992  542564 config.go:182] Loaded profile config "multinode-558690": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:51:22.446022  542564 status.go:174] checking status of multinode-558690 ...
	I1204 23:51:22.446563  542564 cli_runner.go:164] Run: docker container inspect multinode-558690 --format={{.State.Status}}
	I1204 23:51:22.465708  542564 status.go:371] multinode-558690 host status = "Running" (err=<nil>)
	I1204 23:51:22.465741  542564 host.go:66] Checking if "multinode-558690" exists ...
	I1204 23:51:22.466017  542564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-558690
	I1204 23:51:22.483890  542564 host.go:66] Checking if "multinode-558690" exists ...
	I1204 23:51:22.484212  542564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1204 23:51:22.484251  542564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-558690
	I1204 23:51:22.502733  542564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33275 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/multinode-558690/id_rsa Username:docker}
	I1204 23:51:22.591900  542564 ssh_runner.go:195] Run: systemctl --version
	I1204 23:51:22.595949  542564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:51:22.606436  542564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1204 23:51:22.653897  542564 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-12-04 23:51:22.644398497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1204 23:51:22.654815  542564 kubeconfig.go:125] found "multinode-558690" server: "https://192.168.67.2:8443"
	I1204 23:51:22.654856  542564 api_server.go:166] Checking apiserver status ...
	I1204 23:51:22.654912  542564 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1204 23:51:22.665400  542564 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1472/cgroup
	I1204 23:51:22.674493  542564 api_server.go:182] apiserver freezer: "2:freezer:/docker/fa0f8cbf0367447c8cdff0ae2c55caf8e36328ba6a97bc8869a7281b594f8a04/crio/crio-550d1f5915cc9d2b974ffde8f8c9d2f69ef0c1a9742ae4f5d76618b8e2c3d345"
	I1204 23:51:22.674552  542564 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/fa0f8cbf0367447c8cdff0ae2c55caf8e36328ba6a97bc8869a7281b594f8a04/crio/crio-550d1f5915cc9d2b974ffde8f8c9d2f69ef0c1a9742ae4f5d76618b8e2c3d345/freezer.state
	I1204 23:51:22.682451  542564 api_server.go:204] freezer state: "THAWED"
	I1204 23:51:22.682478  542564 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1204 23:51:22.686905  542564 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1204 23:51:22.686930  542564 status.go:463] multinode-558690 apiserver status = Running (err=<nil>)
	I1204 23:51:22.686940  542564 status.go:176] multinode-558690 status: &{Name:multinode-558690 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1204 23:51:22.686959  542564 status.go:174] checking status of multinode-558690-m02 ...
	I1204 23:51:22.687242  542564 cli_runner.go:164] Run: docker container inspect multinode-558690-m02 --format={{.State.Status}}
	I1204 23:51:22.704261  542564 status.go:371] multinode-558690-m02 host status = "Running" (err=<nil>)
	I1204 23:51:22.704285  542564 host.go:66] Checking if "multinode-558690-m02" exists ...
	I1204 23:51:22.704558  542564 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-558690-m02
	I1204 23:51:22.721926  542564 host.go:66] Checking if "multinode-558690-m02" exists ...
	I1204 23:51:22.722198  542564 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1204 23:51:22.722244  542564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-558690-m02
	I1204 23:51:22.739682  542564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33280 SSHKeyPath:/home/jenkins/minikube-integration/20045-381016/.minikube/machines/multinode-558690-m02/id_rsa Username:docker}
	I1204 23:51:22.831965  542564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1204 23:51:22.842544  542564 status.go:176] multinode-558690-m02 status: &{Name:multinode-558690-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1204 23:51:22.842590  542564 status.go:174] checking status of multinode-558690-m03 ...
	I1204 23:51:22.842870  542564 cli_runner.go:164] Run: docker container inspect multinode-558690-m03 --format={{.State.Status}}
	I1204 23:51:22.859846  542564 status.go:371] multinode-558690-m03 host status = "Stopped" (err=<nil>)
	I1204 23:51:22.859877  542564 status.go:384] host is not running, skipping remaining checks
	I1204 23:51:22.859896  542564 status.go:176] multinode-558690-m03 status: &{Name:multinode-558690-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.12s)
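The status log above shows the sequence minikube uses to decide an apiserver is Running: find the kube-apiserver process, confirm its freezer cgroup is THAWED, then probe /healthz over HTTPS. As a reference point only, here is a minimal Go sketch of that final healthz probe, reusing the endpoint from the log; the InsecureSkipVerify setting is an illustration-only shortcut, since minikube authenticates against the cluster CA instead.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Illustration-only TLS config: minikube verifies against the cluster CA.
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.67.2:8443/healthz") // endpoint taken from the log above
        if err != nil {
            fmt.Println("apiserver status = Stopped:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200 and "ok"
    }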

TestMultiNode/serial/StartAfterStop (9.08s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-558690 node start m03 -v=7 --alsologtostderr: (8.413669679s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.08s)

TestMultiNode/serial/RestartKeepsNodes (79.92s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-558690
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-558690
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-558690: (24.759149128s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-558690 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-558690 --wait=true -v=8 --alsologtostderr: (55.048608079s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-558690
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.92s)

TestMultiNode/serial/DeleteNode (5.05s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-558690 node delete m03: (4.461357629s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.05s)

TestMultiNode/serial/StopMultiNode (23.78s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-558690 stop: (23.599768164s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-558690 status: exit status 7 (88.758046ms)

-- stdout --
	multinode-558690
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-558690-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-558690 status --alsologtostderr: exit status 7 (88.455417ms)

-- stdout --
	multinode-558690
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-558690-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1204 23:53:20.644149  551859 out.go:345] Setting OutFile to fd 1 ...
	I1204 23:53:20.644262  551859 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:53:20.644269  551859 out.go:358] Setting ErrFile to fd 2...
	I1204 23:53:20.644275  551859 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1204 23:53:20.644475  551859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-381016/.minikube/bin
	I1204 23:53:20.644713  551859 out.go:352] Setting JSON to false
	I1204 23:53:20.644747  551859 mustload.go:65] Loading cluster: multinode-558690
	I1204 23:53:20.644851  551859 notify.go:220] Checking for updates...
	I1204 23:53:20.645264  551859 config.go:182] Loaded profile config "multinode-558690": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1204 23:53:20.645288  551859 status.go:174] checking status of multinode-558690 ...
	I1204 23:53:20.645739  551859 cli_runner.go:164] Run: docker container inspect multinode-558690 --format={{.State.Status}}
	I1204 23:53:20.664957  551859 status.go:371] multinode-558690 host status = "Stopped" (err=<nil>)
	I1204 23:53:20.664981  551859 status.go:384] host is not running, skipping remaining checks
	I1204 23:53:20.664987  551859 status.go:176] multinode-558690 status: &{Name:multinode-558690 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1204 23:53:20.665022  551859 status.go:174] checking status of multinode-558690-m02 ...
	I1204 23:53:20.665279  551859 cli_runner.go:164] Run: docker container inspect multinode-558690-m02 --format={{.State.Status}}
	I1204 23:53:20.682200  551859 status.go:371] multinode-558690-m02 host status = "Stopped" (err=<nil>)
	I1204 23:53:20.682223  551859 status.go:384] host is not running, skipping remaining checks
	I1204 23:53:20.682229  551859 status.go:176] multinode-558690-m02 status: &{Name:multinode-558690-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.78s)

TestMultiNode/serial/RestartMultiNode (50.11s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-558690 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1204 23:53:51.635475  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-558690 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (49.535107441s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-558690 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.11s)

TestMultiNode/serial/ValidateNameConflict (25.24s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-558690
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-558690-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-558690-m02 --driver=docker  --container-runtime=crio: exit status 14 (71.695266ms)

-- stdout --
	* [multinode-558690-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20045
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20045-381016/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-381016/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-558690-m02' is duplicated with machine name 'multinode-558690-m02' in profile 'multinode-558690'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-558690-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-558690-m03 --driver=docker  --container-runtime=crio: (22.973617056s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-558690
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-558690: exit status 80 (281.73023ms)

-- stdout --
	* Adding node m03 to cluster multinode-558690 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-558690-m03 already exists in multinode-558690-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-558690-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-558690-m03: (1.861436669s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.24s)
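The exit status 14 above comes from profile-name validation: a new profile may not reuse a machine name already owned by an existing multi-node profile. A hedged Go sketch of that kind of check follows; the helper is hypothetical, not minikube's actual code.

    package main

    import "fmt"

    // validateProfileName is a hypothetical stand-in for the uniqueness check
    // the test exercises: a new profile may not collide with a machine name
    // that an existing multi-node profile already owns.
    func validateProfileName(name string, machineNames []string) error {
        for _, m := range machineNames {
            if m == name {
                return fmt.Errorf("profile name %q is duplicated with machine name %q", name, m)
            }
        }
        return nil
    }

    func main() {
        existing := []string{"multinode-558690", "multinode-558690-m02"} // machines from the log above
        if err := validateProfileName("multinode-558690-m02", existing); err != nil {
            fmt.Println("X Exiting due to MK_USAGE:", err)
        }
    }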

TestPreload (103.37s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-188960 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1204 23:55:27.714046  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/functional-217112/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-188960 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m17.822665938s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-188960 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-188960 image pull gcr.io/k8s-minikube/busybox: (1.171675351s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-188960
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-188960: (5.670198515s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-188960 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-188960 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (16.266585678s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-188960 image list
helpers_test.go:175: Cleaning up "test-preload-188960" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-188960
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-188960: (2.214849289s)
--- PASS: TestPreload (103.37s)

TestScheduledStopUnix (99.73s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-794733 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-794733 --memory=2048 --driver=docker  --container-runtime=crio: (23.966126615s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-794733 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-794733 -n scheduled-stop-794733
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-794733 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1204 23:56:47.713753  387894 retry.go:31] will retry after 100.413µs: open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/scheduled-stop-794733/pid: no such file or directory
I1204 23:56:47.715017  387894 retry.go:31] will retry after 187.869µs: open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/scheduled-stop-794733/pid: no such file or directory
I1204 23:56:47.716194  387894 retry.go:31] will retry after 253.665µs: open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/scheduled-stop-794733/pid: no such file or directory
I1204 23:56:47.717353  387894 retry.go:31] will retry after 175.179µs: open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/scheduled-stop-794733/pid: no such file or directory
I1204 23:56:47.718505  387894 retry.go:31] will retry after 377.082µs: open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/scheduled-stop-794733/pid: no such file or directory
I1204 23:56:47.719645  387894 retry.go:31] will retry after 788.031µs: open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/scheduled-stop-794733/pid: no such file or directory
I1204 23:56:47.720783  387894 retry.go:31] will retry after 1.683567ms: open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/scheduled-stop-794733/pid: no such file or directory
I1204 23:56:47.722975  387894 retry.go:31] will retry after 1.919456ms: open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/scheduled-stop-794733/pid: no such file or directory
I1204 23:56:47.725236  387894 retry.go:31] will retry after 2.294267ms: open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/scheduled-stop-794733/pid: no such file or directory
I1204 23:56:47.728490  387894 retry.go:31] will retry after 3.637639ms: open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/scheduled-stop-794733/pid: no such file or directory
I1204 23:56:47.732704  387894 retry.go:31] will retry after 2.916227ms: open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/scheduled-stop-794733/pid: no such file or directory
I1204 23:56:47.735942  387894 retry.go:31] will retry after 5.864558ms: open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/scheduled-stop-794733/pid: no such file or directory
I1204 23:56:47.742180  387894 retry.go:31] will retry after 17.49415ms: open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/scheduled-stop-794733/pid: no such file or directory
I1204 23:56:47.760450  387894 retry.go:31] will retry after 24.987208ms: open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/scheduled-stop-794733/pid: no such file or directory
I1204 23:56:47.785727  387894 retry.go:31] will retry after 36.510148ms: open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/scheduled-stop-794733/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-794733 --cancel-scheduled
E1204 23:56:50.779856  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/functional-217112/client.crt: no such file or directory" logger="UnhandledError"
E1204 23:56:54.702787  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-794733 -n scheduled-stop-794733
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-794733
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-794733 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-794733
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-794733: exit status 7 (82.710252ms)

-- stdout --
	scheduled-stop-794733
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-794733 -n scheduled-stop-794733
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-794733 -n scheduled-stop-794733: exit status 7 (72.541157ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-794733" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-794733
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-794733: (4.405424512s)
--- PASS: TestScheduledStopUnix (99.73s)
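The retry.go lines above show a polling loop with growing delays while the scheduled-stop pid file does not yet exist. Below is a minimal Go sketch of the same wait-with-backoff pattern, using the pid path from the log; the exact delays and attempt cap are assumptions for illustration.

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func main() {
        pidFile := "/home/jenkins/minikube-integration/20045-381016/.minikube/profiles/scheduled-stop-794733/pid"
        delay := 100 * time.Microsecond // starting point taken from the log; growth pattern is assumed
        for attempt := 1; attempt <= 15; attempt++ {
            if _, err := os.ReadFile(pidFile); err == nil {
                fmt.Println("pid file present")
                return
            } else {
                fmt.Printf("will retry after %v: %v\n", delay, err)
            }
            time.Sleep(delay)
            delay *= 2 // grow the wait, roughly matching the log's increasing intervals
        }
        fmt.Println("gave up waiting for pid file")
    }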

TestInsufficientStorage (9.76s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-132033 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-132033 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.41152246s)

-- stdout --
	{"specversion":"1.0","id":"e0328fdb-0e69-48f1-b429-055a0de30a05","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-132033] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"954469a7-f6e9-4a6f-8274-dff57eeeee73","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20045"}}
	{"specversion":"1.0","id":"03ae2952-aca9-4012-8f70-3ed4033ea3bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fd335446-91ab-4cd6-b1b5-ed23095500a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20045-381016/kubeconfig"}}
	{"specversion":"1.0","id":"ca088010-3126-47d9-b897-13daf341aaf5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-381016/.minikube"}}
	{"specversion":"1.0","id":"51e7880b-02eb-43ad-a867-9928b2d4ab11","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"ac31a6e9-bc82-482e-ad1c-8669dc316c64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"673dd74f-5d35-40f4-9e60-95fbbc4c2798","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"803b6acf-694d-4e70-887a-bb92657dd959","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"50bf70c5-ef1c-499d-ae02-6e9088b2b85d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"526d3f86-d663-4132-83cf-20bd1e2c4e2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"2ea824bf-4566-45de-a0c2-c77f7fcb086c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-132033\" primary control-plane node in \"insufficient-storage-132033\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"101a2722-2d44-49ac-be7a-6ac564e88177","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1730888964-19917 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"9e16d6ca-a9bb-4403-9eb8-4990852247d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"44e26621-9320-47fe-a787-5d216f90d954","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-132033 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-132033 --output=json --layout=cluster: exit status 7 (260.056781ms)

-- stdout --
	{"Name":"insufficient-storage-132033","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-132033","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1204 23:58:10.726042  574111 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-132033" does not appear in /home/jenkins/minikube-integration/20045-381016/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-132033 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-132033 --output=json --layout=cluster: exit status 7 (262.899924ms)

-- stdout --
	{"Name":"insufficient-storage-132033","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-132033","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1204 23:58:10.990348  574211 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-132033" does not appear in /home/jenkins/minikube-integration/20045-381016/kubeconfig
	E1204 23:58:11.000196  574211 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/insufficient-storage-132033/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-132033" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-132033
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-132033: (1.822153439s)
--- PASS: TestInsufficientStorage (9.76s)
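With --output=json, minikube writes one CloudEvents-style JSON object per line, and the storage failure above arrives as an io.k8s.sigs.minikube.error event carrying exitcode 26 in its data payload. The Go sketch below decodes such a line; the struct is a best-effort mapping of the fields visible in the output above, not an official schema.

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "strings"
    )

    // event mirrors the fields visible in the JSON lines above.
    type event struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE","message":"Docker is out of disk space!"}}`
        sc := bufio.NewScanner(strings.NewReader(line))
        for sc.Scan() {
            var ev event
            if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
                continue // skip lines that are not JSON events
            }
            if ev.Type == "io.k8s.sigs.minikube.error" {
                fmt.Printf("%s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
            }
        }
    }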

TestRunningBinaryUpgrade (59.97s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.864336068 start -p running-upgrade-626625 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.864336068 start -p running-upgrade-626625 --memory=2200 --vm-driver=docker  --container-runtime=crio: (32.631310093s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-626625 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-626625 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.233464587s)
helpers_test.go:175: Cleaning up "running-upgrade-626625" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-626625
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-626625: (2.641447605s)
--- PASS: TestRunningBinaryUpgrade (59.97s)

TestKubernetesUpgrade (351.97s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-204182 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-204182 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (46.246416657s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-204182
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-204182: (1.261993403s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-204182 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-204182 status --format={{.Host}}: exit status 7 (75.357031ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-204182 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-204182 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m27.732798171s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-204182 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-204182 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-204182 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (76.428725ms)

-- stdout --
	* [kubernetes-upgrade-204182] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20045
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20045-381016/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-381016/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-204182
	    minikube start -p kubernetes-upgrade-204182 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2041822 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-204182 --kubernetes-version=v1.31.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-204182 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-204182 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.085557235s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-204182" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-204182
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-204182: (2.416688784s)
--- PASS: TestKubernetesUpgrade (351.97s)
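Exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) above is raised before any cluster mutation, purely from comparing the requested version against the deployed one. A sketch of that comparison using golang.org/x/mod/semver, chosen here only for illustration; minikube's own version handling may differ.

    package main

    import (
        "fmt"

        "golang.org/x/mod/semver"
    )

    func main() {
        current, requested := "v1.31.2", "v1.20.0" // versions from the test above
        if semver.Compare(requested, current) < 0 {
            fmt.Printf("Unable to safely downgrade existing Kubernetes %s cluster to %s\n", current, requested)
            return
        }
        fmt.Println("upgrade (or restart at the same version) is allowed")
    }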

TestMissingContainerUpgrade (143.65s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3899244841 start -p missing-upgrade-211395 --memory=2200 --driver=docker  --container-runtime=crio
E1204 23:58:51.635955  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3899244841 start -p missing-upgrade-211395 --memory=2200 --driver=docker  --container-runtime=crio: (1m8.859122821s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-211395
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-211395: (14.756494351s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-211395
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-211395 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-211395 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (56.959880213s)
helpers_test.go:175: Cleaning up "missing-upgrade-211395" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-211395
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-211395: (2.176538161s)
--- PASS: TestMissingContainerUpgrade (143.65s)

TestStoppedBinaryUpgrade/Setup (0.49s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.49s)

TestPause/serial/Start (52.52s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-171855 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-171855 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (52.517323776s)
--- PASS: TestPause/serial/Start (52.52s)

TestStoppedBinaryUpgrade/Upgrade (95.8s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1192649752 start -p stopped-upgrade-193388 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1192649752 start -p stopped-upgrade-193388 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m7.725125883s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1192649752 -p stopped-upgrade-193388 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1192649752 -p stopped-upgrade-193388 stop: (2.385643397s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-193388 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-193388 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.688170602s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (95.80s)

TestPause/serial/SecondStartNoReconfiguration (38.17s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-171855 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-171855 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (38.151159612s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (38.17s)

TestPause/serial/Pause (0.75s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-171855 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.75s)

TestPause/serial/VerifyStatus (0.36s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-171855 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-171855 --output=json --layout=cluster: exit status 2 (357.096184ms)

-- stdout --
	{"Name":"pause-171855","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-171855","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)

TestPause/serial/Unpause (0.65s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-171855 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.65s)

TestPause/serial/PauseAgain (0.73s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-171855 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.73s)

TestPause/serial/DeletePaused (2.76s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-171855 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-171855 --alsologtostderr -v=5: (2.758414319s)
--- PASS: TestPause/serial/DeletePaused (2.76s)

TestPause/serial/VerifyDeletedResources (0.61s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-171855
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-171855: exit status 1 (17.2046ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-171855: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.61s)
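VerifyDeletedResources above treats a non-zero exit from docker volume inspect (with "no such volume" on stderr) as confirmation that the profile's volume is gone. A small Go sketch of that check via os/exec:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // A non-zero exit (with "no such volume" on stderr) is the expected
        // outcome once the profile has been deleted.
        out, err := exec.Command("docker", "volume", "inspect", "pause-171855").CombinedOutput()
        if err != nil {
            fmt.Printf("volume gone as expected: %v\n%s", err, out)
            return
        }
        fmt.Printf("volume still exists:\n%s", out)
    }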

TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-193388
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-564352 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-564352 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (81.210758ms)

-- stdout --
	* [NoKubernetes-564352] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20045
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20045-381016/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-381016/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (29.34s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-564352 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-564352 --driver=docker  --container-runtime=crio: (29.014433742s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-564352 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (29.34s)

TestNetworkPlugins/group/false (5.01s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-975631 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-975631 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (237.339563ms)

-- stdout --
	* [false-975631] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20045
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20045-381016/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-381016/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1205 00:00:12.372168  604760 out.go:345] Setting OutFile to fd 1 ...
	I1205 00:00:12.372296  604760 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:00:12.372306  604760 out.go:358] Setting ErrFile to fd 2...
	I1205 00:00:12.372310  604760 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 00:00:12.372548  604760 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20045-381016/.minikube/bin
	I1205 00:00:12.373160  604760 out.go:352] Setting JSON to false
	I1205 00:00:12.374774  604760 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9761,"bootTime":1733347051,"procs":323,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 00:00:12.374949  604760 start.go:139] virtualization: kvm guest
	I1205 00:00:12.378135  604760 out.go:177] * [false-975631] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 00:00:12.379682  604760 notify.go:220] Checking for updates...
	I1205 00:00:12.380211  604760 out.go:177]   - MINIKUBE_LOCATION=20045
	I1205 00:00:12.381634  604760 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 00:00:12.383016  604760 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20045-381016/kubeconfig
	I1205 00:00:12.384557  604760 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20045-381016/.minikube
	I1205 00:00:12.386079  604760 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 00:00:12.387516  604760 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 00:00:12.389347  604760 config.go:182] Loaded profile config "NoKubernetes-564352": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 00:00:12.389449  604760 config.go:182] Loaded profile config "force-systemd-env-159420": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 00:00:12.389520  604760 config.go:182] Loaded profile config "missing-upgrade-211395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1205 00:00:12.389611  604760 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 00:00:12.428150  604760 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I1205 00:00:12.428315  604760 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1205 00:00:12.505942  604760 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:75 SystemTime:2024-12-05 00:00:12.491510153 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1071-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647927296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1205 00:00:12.506094  604760 docker.go:318] overlay module found
	I1205 00:00:12.507923  604760 out.go:177] * Using the docker driver based on user configuration
	I1205 00:00:12.509210  604760 start.go:297] selected driver: docker
	I1205 00:00:12.509227  604760 start.go:901] validating driver "docker" against <nil>
	I1205 00:00:12.509277  604760 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 00:00:12.512158  604760 out.go:201] 
	W1205 00:00:12.514176  604760 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1205 00:00:12.515453  604760 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-975631 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-975631

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-975631

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-975631

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-975631

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-975631

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-975631

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-975631

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-975631

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-975631

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-975631

>>> host: /etc/nsswitch.conf:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> host: /etc/hosts:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> host: /etc/resolv.conf:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-975631

>>> host: crictl pods:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> host: crictl containers:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> k8s: describe netcat deployment:
error: context "false-975631" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-975631" does not exist

>>> k8s: netcat logs:
error: context "false-975631" does not exist

>>> k8s: describe coredns deployment:
error: context "false-975631" does not exist

>>> k8s: describe coredns pods:
error: context "false-975631" does not exist

>>> k8s: coredns logs:
error: context "false-975631" does not exist

>>> k8s: describe api server pod(s):
error: context "false-975631" does not exist

>>> k8s: api server logs:
error: context "false-975631" does not exist

>>> host: /etc/cni:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> host: ip a s:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> host: ip r s:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> host: iptables-save:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> host: iptables table nat:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> k8s: describe kube-proxy daemon set:
error: context "false-975631" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-975631" does not exist

>>> k8s: kube-proxy logs:
error: context "false-975631" does not exist

>>> host: kubelet daemon status:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> host: kubelet daemon config:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> k8s: kubelet logs:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20045-381016/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 04 Dec 2024 23:59:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: missing-upgrade-211395
contexts:
- context:
    cluster: missing-upgrade-211395
    extensions:
    - extension:
        last-update: Wed, 04 Dec 2024 23:59:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: context_info
    namespace: default
    user: missing-upgrade-211395
  name: missing-upgrade-211395
current-context: ""
kind: Config
preferences: {}
users:
- name: missing-upgrade-211395
  user:
    client-certificate: /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/missing-upgrade-211395/client.crt
    client-key: /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/missing-upgrade-211395/client.key
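
Note: this kubeconfig dump explains every failed probe above: the file only defines the missing-upgrade-211395 cluster and context, and current-context is empty, so nothing in it can resolve the requested false-975631 context. Standard kubectl subcommands to confirm what a kubeconfig can address (illustrative, not part of this run):

    kubectl config get-contexts
    kubectl config current-context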

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-975631

>>> host: docker daemon status:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> host: docker daemon config:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> host: /etc/docker/daemon.json:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> host: docker system info:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> host: cri-docker daemon status:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> host: cri-docker daemon config:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> host: cri-dockerd version:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> host: containerd daemon status:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> host: containerd daemon config:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> host: /etc/containerd/config.toml:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> host: containerd config dump:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> host: crio daemon status:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> host: crio daemon config:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> host: /etc/crio:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

>>> host: crio config:
* Profile "false-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-975631"

----------------------- debugLogs end: false-975631 [took: 4.583224867s] --------------------------------
helpers_test.go:175: Cleaning up "false-975631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-975631
--- PASS: TestNetworkPlugins/group/false (5.01s)
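Note: this PASS is the intended outcome. The "false" network-plugin variant starts minikube with CNI disabled, and with the crio runtime minikube refuses to start ("The \"crio\" container runtime requires CNI"), so the MK_USAGE exit above is exactly what the test asserts. For contrast, a start that should come up with crio selects a CNI explicitly; a minimal sketch reusing this profile name (hypothetical invocation, not part of the run):

    out/minikube-linux-amd64 start -p false-975631 --driver=docker --container-runtime=crio --cni=bridge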

TestNoKubernetes/serial/StartWithStopK8s (11.73s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-564352 --no-kubernetes --driver=docker  --container-runtime=crio
I1205 00:00:24.095159  387894 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 00:00:24.095260  387894 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1205 00:00:24.128687  387894 install.go:137] /home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1205 00:00:24.128724  387894 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1205 00:00:24.128797  387894 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1205 00:00:24.128831  387894 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3285035786/002/docker-machine-driver-kvm2
I1205 00:00:24.317913  387894 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3285035786/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020] Decompressors:map[bz2:0xc000517a50 gz:0xc000517a58 tar:0xc000517a00 tar.bz2:0xc000517a10 tar.gz:0xc000517a20 tar.xz:0xc000517a30 tar.zst:0xc000517a40 tbz2:0xc000517a10 tgz:0xc000517a20 txz:0xc000517a30 tzst:0xc000517a40 xz:0xc000517a60 zip:0xc000517a70 zst:0xc000517a68] Getters:map[file:0xc000973e10 http:0xc0008c9860 https:0xc0008c98b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1205 00:00:24.317986  387894 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3285035786/002/docker-machine-driver-kvm2
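Note: the install.go/download.go lines above come from the concurrently running KVM driver install/update test (note the /tmp/TestKVMDriverInstallOrUpdate... destination path) and are merely interleaved into this test's output. They show the two-step driver fetch: try the arch-specific release asset first, and when its checksum file returns 404, fall back to the common asset name. A hedged reproduction of that probe with plain curl, using the checksum URLs taken from the log:

    curl -fsSLI https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 \
      || curl -fsSLI https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256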
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-564352 --no-kubernetes --driver=docker  --container-runtime=crio: (7.656648708s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-564352 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-564352 status -o json: exit status 2 (353.055621ms)

-- stdout --
	{"Name":"NoKubernetes-564352","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
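Note: exit status 2 is the expected shape here rather than a failure: the profile was started with --no-kubernetes, so status reports the host Running while Kubelet and APIServer stay Stopped. A sketch for pulling the same fields out of that JSON (assumes jq is installed):

    out/minikube-linux-amd64 -p NoKubernetes-564352 status -o json | jq -r '[.Host, .Kubelet, .APIServer] | join("/")'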
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-564352
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-564352: (3.720552776s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (11.73s)

TestNoKubernetes/serial/Start (11.7s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-564352 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-564352 --no-kubernetes --driver=docker  --container-runtime=crio: (11.695157547s)
--- PASS: TestNoKubernetes/serial/Start (11.70s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-564352 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-564352 "sudo systemctl is-active --quiet service kubelet": exit status 1 (321.681069ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
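Note: "Process exited with status 3" is the desired result here: systemctl is-active exits 0 only when the unit is active (3 conventionally meaning inactive), so the non-zero exit is the proof that no kubelet runs in this --no-kubernetes profile. Run directly on the host, the equivalent check would be:

    sudo systemctl is-active --quiet kubelet && echo active || echo inactive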
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

TestNoKubernetes/serial/ProfileList (1.93s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.03997926s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.93s)

TestNoKubernetes/serial/Stop (1.22s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-564352
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-564352: (1.222318148s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

TestNoKubernetes/serial/StartNoArgs (6.75s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-564352 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-564352 --driver=docker  --container-runtime=crio: (6.753124529s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.75s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.4s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-564352 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-564352 "sudo systemctl is-active --quiet service kubelet": exit status 1 (403.762687ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.40s)

TestStartStop/group/old-k8s-version/serial/FirstStart (129.5s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-404540 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-404540 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m9.501008824s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (129.50s)

TestStartStop/group/no-preload/serial/FirstStart (51.85s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-169740 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-169740 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (51.852942548s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.85s)

TestStartStop/group/no-preload/serial/DeployApp (9.24s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-169740 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [457ec5a0-aaab-405b-b789-8f047debfad6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [457ec5a0-aaab-405b-b789-8f047debfad6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004665801s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-169740 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.24s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-169740 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-169740 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

TestStartStop/group/no-preload/serial/Stop (11.87s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-169740 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-169740 --alsologtostderr -v=3: (11.87469027s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.87s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-169740 -n no-preload-169740
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-169740 -n no-preload-169740: exit status 7 (73.659041ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-169740 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)
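Note: for minikube status, exit status 7 immediately after a stop is the expected value, which is why the harness logs "(may be ok)". As far as the status command documents it, the exit code encodes component state in bit flags, so 7 corresponds to everything down on a stopped profile; a quick manual check:

    # assumption: bit 0 = host, bit 1 = kubelet, bit 2 = apiserver not running
    out/minikube-linux-amd64 status -p no-preload-169740; echo "exit=$?"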

TestStartStop/group/no-preload/serial/SecondStart (262.94s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-169740 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-169740 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m22.593066158s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-169740 -n no-preload-169740
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (262.94s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-404540 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bc8f9ff5-cecc-4e87-b38e-57e9f1c2b887] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [bc8f9ff5-cecc-4e87-b38e-57e9f1c2b887] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003740026s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-404540 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.39s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.8s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-404540 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-404540 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.80s)

TestStartStop/group/old-k8s-version/serial/Stop (11.92s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-404540 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-404540 --alsologtostderr -v=3: (11.9209048s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.92s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-404540 -n old-k8s-version-404540
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-404540 -n old-k8s-version-404540: exit status 7 (71.828611ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-404540 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/old-k8s-version/serial/SecondStart (121.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-404540 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E1205 00:03:51.635414  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-404540 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m1.134903451s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-404540 -n old-k8s-version-404540
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (121.52s)

TestStartStop/group/embed-certs/serial/FirstStart (45.7s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-671379 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-671379 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (45.696810716s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.70s)

TestStartStop/group/embed-certs/serial/DeployApp (9.23s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-671379 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f6f62d75-1bc6-4f48-b0c2-0dfd18ff3d4e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f6f62d75-1bc6-4f48-b0c2-0dfd18ff3d4e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003905554s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-671379 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.23s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-671379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-671379 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.97s)

TestStartStop/group/embed-certs/serial/Stop (11.86s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-671379 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-671379 --alsologtostderr -v=3: (11.863959813s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.86s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-671379 -n embed-certs-671379
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-671379 -n embed-certs-671379: exit status 7 (82.909974ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-671379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (285.03s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-671379 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1205 00:05:27.714137  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/functional-217112/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-671379 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m44.680044695s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-671379 -n embed-certs-671379
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (285.03s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-f7w44" [739c43a8-4c54-4a96-8869-300c65f07226] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005400679s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-f7w44" [739c43a8-4c54-4a96-8869-300c65f07226] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004458185s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-404540 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-404540 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (2.61s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-404540 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-404540 -n old-k8s-version-404540
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-404540 -n old-k8s-version-404540: exit status 2 (295.151872ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-404540 -n old-k8s-version-404540
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-404540 -n old-k8s-version-404540: exit status 2 (299.937438ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-404540 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-404540 -n old-k8s-version-404540
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-404540 -n old-k8s-version-404540
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.61s)
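Note: the paired status probes bracket the pause/unpause round trip: while paused, the APIServer field reads Paused and Kubelet reads Stopped, each probe exiting 2 ("may be ok"), and after unpause both probes return cleanly. The same round trip outside the harness would look like:

    out/minikube-linux-amd64 pause -p old-k8s-version-404540
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p old-k8s-version-404540   # prints "Paused"
    out/minikube-linux-amd64 unpause -p old-k8s-version-404540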

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.47s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-496518 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-496518 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (41.474388291s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (41.47s)

TestStartStop/group/newest-cni/serial/FirstStart (29.03s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-399630 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-399630 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (29.033851635s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (29.03s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-496518 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9f2364cd-db40-4f9b-992c-213ce78cdd9e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9f2364cd-db40-4f9b-992c-213ce78cdd9e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004384595s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-496518 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.26s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-399630 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.88s)

TestStartStop/group/newest-cni/serial/Stop (1.21s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-399630 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-399630 --alsologtostderr -v=3: (1.209521626s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.21s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-496518 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-496518 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.04s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-399630 -n newest-cni-399630
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-399630 -n newest-cni-399630: exit status 7 (83.434797ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-399630 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)
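
As the "may be ok" note indicates, a stopped profile makes status exit non-zero (exit code 7 here), and the addon is still enabled against the stored profile configuration. A by-hand sketch of the same sequence:

    out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-399630 -n newest-cni-399630   # prints "Stopped", exits 7
    out/minikube-linux-amd64 addons enable dashboard -p newest-cni-399630 --images=MetricsScraper=registry.k8s.io/echoserver:1.4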

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (13.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-399630 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-399630 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (12.736606984s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-399630 -n newest-cni-399630
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (11.94s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-496518 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-496518 --alsologtostderr -v=3: (11.944844207s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.94s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-496518 -n default-k8s-diff-port-496518
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-496518 -n default-k8s-diff-port-496518: exit status 7 (85.420266ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-496518 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-496518 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-496518 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m22.915916487s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-496518 -n default-k8s-diff-port-496518
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (263.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-399630 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.91s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-399630 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-399630 -n newest-cni-399630
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-399630 -n newest-cni-399630: exit status 2 (301.293253ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-399630 -n newest-cni-399630
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-399630 -n newest-cni-399630: exit status 2 (324.083233ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-399630 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-399630 -n newest-cni-399630
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-399630 -n newest-cni-399630
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.91s)
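
The Pause check drives a full pause/unpause round trip: while paused, status exits 2 with the API server reported as Paused and the kubelet as Stopped, both tolerated. The recorded sequence, runnable by hand:

    out/minikube-linux-amd64 pause -p newest-cni-399630 --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-399630 -n newest-cni-399630   # "Paused", exit 2
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-399630 -n newest-cni-399630     # "Stopped", exit 2
    out/minikube-linux-amd64 unpause -p newest-cni-399630 --alsologtostderr -v=1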

                                                
                                    
TestNetworkPlugins/group/auto/Start (45.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-975631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-975631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (45.331410088s)
--- PASS: TestNetworkPlugins/group/auto/Start (45.33s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-99rtj" [7606c377-c34d-47cb-99da-7f4fab09ba90] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003924254s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-99rtj" [7606c377-c34d-47cb-99da-7f4fab09ba90] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004179348s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-169740 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-169740 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-169740 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-169740 -n no-preload-169740
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-169740 -n no-preload-169740: exit status 2 (351.362349ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-169740 -n no-preload-169740
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-169740 -n no-preload-169740: exit status 2 (343.23297ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-169740 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-169740 -n no-preload-169740
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-169740 -n no-preload-169740
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.13s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (45.71s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-975631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-975631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (45.714691487s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (45.71s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-975631 "pgrep -a kubelet"
I1205 00:07:53.184682  387894 config.go:182] Loaded profile config "auto-975631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)
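
KubeletFlags greps the live kubelet over SSH, so the flags a profile was started with can be inspected directly:

    out/minikube-linux-amd64 ssh -p auto-975631 "pgrep -a kubelet"   # prints the kubelet PID and its full argument list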

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-975631 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5jz8j" [545b872b-615f-4a1f-bec8-06151182544b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5jz8j" [545b872b-615f-4a1f-bec8-06151182544b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004596659s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.19s)
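
NetCatPod force-replaces the netcat deployment from testdata and polls until the pod is Ready. A roughly equivalent manual check; the kubectl wait line is an assumed stand-in for the test helper's 15m poll loop:

    kubectl --context auto-975631 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-975631 wait --for=condition=Ready pod -l app=netcat --timeout=15m   # stand-in for the helper's poll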

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-975631 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-975631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-975631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
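
DNS, Localhost, and HairPin form the connectivity triplet run against every plugin profile: in-cluster name resolution, a pod dialing its own localhost, and a pod reaching itself back through the netcat service (the hairpin case). The three probes exactly as recorded:

    kubectl --context auto-975631 exec deployment/netcat -- nslookup kubernetes.default
    kubectl --context auto-975631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    kubectl --context auto-975631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"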

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-rg2kf" [40b915cb-bd79-4a00-a778-e7b9f3a9a375] Running
E1205 00:08:16.702191  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/old-k8s-version-404540/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:08:16.708645  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/old-k8s-version-404540/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:08:16.720234  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/old-k8s-version-404540/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:08:16.741692  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/old-k8s-version-404540/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:08:16.783126  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/old-k8s-version-404540/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:08:16.864672  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/old-k8s-version-404540/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:08:17.025949  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/old-k8s-version-404540/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:08:17.347746  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/old-k8s-version-404540/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:08:17.989240  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/old-k8s-version-404540/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004314302s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
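
ControllerPod asserts only that the CNI's own daemon pod is healthy; the repeated cert_rotation errors above reference a missing client.crt left over from the old-k8s-version profile and do not affect this result. A roughly equivalent manual check, with kubectl wait as an assumed stand-in for the helper's 10m poll:

    kubectl --context kindnet-975631 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m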

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-975631 "pgrep -a kubelet"
I1205 00:08:18.469063  387894 config.go:182] Loaded profile config "kindnet-975631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-975631 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-w9vkf" [5a98a290-944f-42eb-83a4-f2cf0d4c3a0b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1205 00:08:19.271480  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/old-k8s-version-404540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-w9vkf" [5a98a290-944f-42eb-83a4-f2cf0d4c3a0b] Running
E1205 00:08:26.954683  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/old-k8s-version-404540/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004159651s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (55.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-975631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-975631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (55.142240844s)
--- PASS: TestNetworkPlugins/group/calico/Start (55.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-975631 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-975631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-975631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (49.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-975631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1205 00:08:51.635182  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/addons-630093/client.crt: no such file or directory" logger="UnhandledError"
E1205 00:08:57.678575  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/old-k8s-version-404540/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-975631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (49.254417082s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (49.25s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-cx8g6" [147b6d90-8a00-470c-8c74-6e710d7f1f36] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004952597s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-975631 "pgrep -a kubelet"
I1205 00:09:23.760650  387894 config.go:182] Loaded profile config "calico-975631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-975631 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4c67z" [90294a77-5584-4ffb-bd52-45cefe580f30] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4c67z" [90294a77-5584-4ffb-bd52-45cefe580f30] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004529272s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.19s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-975631 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-975631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-975631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-975631 "pgrep -a kubelet"
I1205 00:09:37.892274  387894 config.go:182] Loaded profile config "custom-flannel-975631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-975631 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-b4trr" [6c26f385-6468-48b3-97ef-840c1ebfc805] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1205 00:09:38.639941  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/old-k8s-version-404540/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-b4trr" [6c26f385-6468-48b3-97ef-840c1ebfc805] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004283887s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-975631 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-975631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-975631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (72.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-975631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-975631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m12.516031379s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (72.52s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (48.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-975631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-975631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (48.373001714s)
--- PASS: TestNetworkPlugins/group/flannel/Start (48.37s)
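
The plugin profiles above share one start invocation, varying only how the CNI is selected: auto omits the flag entirely, enable-default-cni passes --enable-default-cni=true instead, and custom-flannel points --cni at a manifest. The common shape, with the variable parts in angle brackets:

    out/minikube-linux-amd64 start -p <plugin>-975631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=<kindnet|calico|flannel|bridge|testdata/kube-flannel.yaml> --driver=docker --container-runtime=crio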

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7smx6" [9edcaf7d-6505-44a5-bb0a-d6c4e49cb3db] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003704177s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7smx6" [9edcaf7d-6505-44a5-bb0a-d6c4e49cb3db] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004304135s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-671379 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-671379 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.06s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-671379 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-671379 -n embed-certs-671379
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-671379 -n embed-certs-671379: exit status 2 (358.888963ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-671379 -n embed-certs-671379
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-671379 -n embed-certs-671379: exit status 2 (354.49428ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-671379 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-671379 -n embed-certs-671379
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-671379 -n embed-certs-671379
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.06s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (37.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-975631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1205 00:10:27.714236  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/functional-217112/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-975631 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (37.453246052s)
--- PASS: TestNetworkPlugins/group/bridge/Start (37.45s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-bktv9" [e3a69ab5-525d-4de0-8a84-0b46639f75da] Running
E1205 00:11:00.561496  387894 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/old-k8s-version-404540/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004903185s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-975631 "pgrep -a kubelet"
I1205 00:11:03.408802  387894 config.go:182] Loaded profile config "flannel-975631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-975631 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-k6z6f" [2b4e0a02-6f35-4915-a2d8-4fa3a5a95786] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-k6z6f" [2b4e0a02-6f35-4915-a2d8-4fa3a5a95786] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004628532s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-975631 "pgrep -a kubelet"
I1205 00:11:04.829300  387894 config.go:182] Loaded profile config "bridge-975631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-975631 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hmrqg" [40aab358-5ec0-4469-b4a2-ee1aafd8cce3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hmrqg" [40aab358-5ec0-4469-b4a2-ee1aafd8cce3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004694373s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-975631 "pgrep -a kubelet"
I1205 00:11:05.455492  387894 config.go:182] Loaded profile config "enable-default-cni-975631": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-975631 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qnj64" [8b62726d-df58-44c9-b502-0c5d67355098] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qnj64" [8b62726d-df58-44c9-b502-0c5d67355098] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004436202s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-975631 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-975631 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-975631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-975631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-975631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-975631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (21.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-975631 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Non-zero exit: kubectl --context bridge-975631 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.164059147s)

                                                
                                                
-- stdout --
	;; connection timed out; no servers could be reached
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1205 00:11:30.216007  387894 retry.go:31] will retry after 935.657364ms: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context bridge-975631 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:175: (dbg) Done: kubectl --context bridge-975631 exec deployment/netcat -- nslookup kubernetes.default: (5.136990902s)
--- PASS: TestNetworkPlugins/group/bridge/DNS (21.24s)
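
This is the only retried probe in this stretch of the run: the first nslookup timed out after ~15s, the harness retried after ~0.9s, and the second attempt resolved in ~5s, keeping the test within its 21.24s total. The probe itself is a single command, retried on non-zero exit:

    kubectl --context bridge-975631 exec deployment/netcat -- nslookup kubernetes.default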

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bswkf" [c7dd7e7d-4eb9-4dd2-ab59-06e12acb86a5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004511981s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-bswkf" [c7dd7e7d-4eb9-4dd2-ab59-06e12acb86a5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004109956s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-496518 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-496518 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241023-a345ebe4
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)
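
VerifyKubernetesImages dumps the profile's image list and reports anything minikube did not ship itself (here the busybox and kindnetd images pulled by earlier steps):

    out/minikube-linux-amd64 -p default-k8s-diff-port-496518 image list --format=json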

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-975631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.67s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-496518 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-496518 -n default-k8s-diff-port-496518
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-496518 -n default-k8s-diff-port-496518: exit status 2 (291.535606ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-496518 -n default-k8s-diff-port-496518
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-496518 -n default-k8s-diff-port-496518: exit status 2 (292.480761ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-496518 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-496518 -n default-k8s-diff-port-496518
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-496518 -n default-k8s-diff-port-496518
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.67s)
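Note: the two exit-status-2 results above are expected, not errors: while the cluster is paused, "minikube status" still prints each component's state on stdout but exits non-zero, which is why the harness logs "status error: exit status 2 (may be ok)". A sketch of that tolerant check (profile name copied from this run; cleanup and error handling simplified):

-- example (Go, illustrative sketch) --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// status runs "minikube status" for one field and tolerates a non-zero
// exit: stdout still carries the state (e.g. "Paused") even when the
// command exits with status 2.
func status(profile, format string) string {
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format="+format, "-p", profile, "-n", profile).Output()
	if _, ok := err.(*exec.ExitError); err != nil && !ok {
		panic(err) // the binary itself failed to run
	}
	return strings.TrimSpace(string(out))
}

func main() {
	profile := "default-k8s-diff-port-496518"
	exec.Command("out/minikube-linux-amd64", "pause", "-p", profile).Run()
	fmt.Println(status(profile, "{{.APIServer}}")) // expect "Paused"
	fmt.Println(status(profile, "{{.Kubelet}}"))   // expect "Stopped"
	exec.Command("out/minikube-linux-amd64", "unpause", "-p", profile).Run()
}
-- /example --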

TestNetworkPlugins/group/bridge/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-975631 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)
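Note: HairPin differs from Localhost only in the target: the netcat pod dials its own Service name (netcat:8080) instead of localhost, so traffic leaves the pod, hits the service VIP, and must be NATed back to the originating pod. That round trip only succeeds when hairpin NAT is enabled on the node's bridge, which is what this check exercises.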

Test skip (26/329)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

TestDownloadOnly/v1.31.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

TestDownloadOnly/v1.31.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

TestAddons/serial/Volcano (0.27s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-630093 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.27s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:702: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-885812" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-885812
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (3.67s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-975631 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-975631

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-975631

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-975631

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-975631

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-975631

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-975631

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-975631

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-975631

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-975631

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-975631

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> host: /etc/hosts:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> host: /etc/resolv.conf:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-975631

>>> host: crictl pods:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> host: crictl containers:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> k8s: describe netcat deployment:
error: context "kubenet-975631" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-975631" does not exist

>>> k8s: netcat logs:
error: context "kubenet-975631" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-975631" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-975631" does not exist

>>> k8s: coredns logs:
error: context "kubenet-975631" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-975631" does not exist

>>> k8s: api server logs:
error: context "kubenet-975631" does not exist

>>> host: /etc/cni:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> host: ip a s:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> host: ip r s:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> host: iptables-save:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> host: iptables table nat:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-975631" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-975631" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-975631" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> host: kubelet daemon config:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> k8s: kubelet logs:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20045-381016/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 04 Dec 2024 23:59:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: missing-upgrade-211395
contexts:
- context:
    cluster: missing-upgrade-211395
    extensions:
    - extension:
        last-update: Wed, 04 Dec 2024 23:59:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: context_info
    namespace: default
    user: missing-upgrade-211395
  name: missing-upgrade-211395
current-context: ""
kind: Config
preferences: {}
users:
- name: missing-upgrade-211395
  user:
    client-certificate: /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/missing-upgrade-211395/client.crt
    client-key: /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/missing-upgrade-211395/client.key
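
Note: the kubeconfig above explains every failure in this dump: it contains only the missing-upgrade-211395 entry and an empty current-context, so "kubectl --context kubenet-975631" cannot succeed. A small client-go sketch that surfaces the same facts (requires k8s.io/client-go; the default loading rules mirror kubectl's):

-- example (Go, illustrative sketch) --
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the merged kubeconfig the same way kubectl does.
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		panic(err)
	}
	fmt.Printf("current-context: %q\n", cfg.CurrentContext)
	for name := range cfg.Contexts {
		fmt.Println("available context:", name)
	}
	// The debug harness asks for a context that was never created:
	if _, ok := cfg.Contexts["kubenet-975631"]; !ok {
		fmt.Println(`context "kubenet-975631" does not exist`)
	}
}
-- /example --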

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-975631

>>> host: docker daemon status:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> host: docker daemon config:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> host: docker system info:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> host: cri-docker daemon status:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> host: cri-docker daemon config:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> host: cri-dockerd version:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> host: containerd daemon status:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> host: containerd daemon config:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> host: containerd config dump:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> host: crio daemon status:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> host: crio daemon config:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> host: /etc/crio:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

>>> host: crio config:
* Profile "kubenet-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-975631"

----------------------- debugLogs end: kubenet-975631 [took: 3.498658441s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-975631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-975631
--- SKIP: TestNetworkPlugins/group/kubenet (3.67s)
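Note: this skip is by design rather than a defect: kubenet is kubelet's legacy built-in network plugin, while CRI runtimes such as cri-o obtain pod networking from a CNI plugin, so the suite cannot exercise kubenet on this runtime.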

TestNetworkPlugins/group/cilium (4.81s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-975631 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-975631

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-975631

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-975631

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-975631

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-975631

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-975631

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-975631

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-975631

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-975631

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-975631

>>> host: /etc/nsswitch.conf:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> host: /etc/hosts:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> host: /etc/resolv.conf:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-975631

>>> host: crictl pods:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> host: crictl containers:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> k8s: describe netcat deployment:
error: context "cilium-975631" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-975631" does not exist

>>> k8s: netcat logs:
error: context "cilium-975631" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-975631" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-975631" does not exist

>>> k8s: coredns logs:
error: context "cilium-975631" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-975631" does not exist

>>> k8s: api server logs:
error: context "cilium-975631" does not exist

>>> host: /etc/cni:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> host: ip a s:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> host: ip r s:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> host: iptables-save:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> host: iptables table nat:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-975631

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-975631

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-975631" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-975631" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-975631

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-975631

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-975631" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-975631" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-975631" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-975631" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-975631" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> host: kubelet daemon config:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> k8s: kubelet logs:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20045-381016/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 05 Dec 2024 00:00:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: force-systemd-env-159420
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20045-381016/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 04 Dec 2024 23:59:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: missing-upgrade-211395
contexts:
- context:
    cluster: force-systemd-env-159420
    extensions:
    - extension:
        last-update: Thu, 05 Dec 2024 00:00:19 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: force-systemd-env-159420
  name: force-systemd-env-159420
- context:
    cluster: missing-upgrade-211395
    extensions:
    - extension:
        last-update: Wed, 04 Dec 2024 23:59:21 UTC
        provider: minikube.sigs.k8s.io
        version: v1.26.0
      name: context_info
    namespace: default
    user: missing-upgrade-211395
  name: missing-upgrade-211395
current-context: force-systemd-env-159420
kind: Config
preferences: {}
users:
- name: force-systemd-env-159420
  user:
    client-certificate: /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/force-systemd-env-159420/client.crt
    client-key: /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/force-systemd-env-159420/client.key
- name: missing-upgrade-211395
  user:
    client-certificate: /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/missing-upgrade-211395/client.crt
    client-key: /home/jenkins/minikube-integration/20045-381016/.minikube/profiles/missing-upgrade-211395/client.key
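
Note: same root cause as the kubenet dump: the merged kubeconfig now holds force-systemd-env-159420 (the current context) and missing-upgrade-211395, but no cilium-975631 entry, so the kubectl queries fail with "context was not found" while the minikube commands report the profile as missing.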

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-975631

>>> host: docker daemon status:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> host: docker daemon config:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> host: docker system info:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> host: cri-docker daemon status:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> host: cri-docker daemon config:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> host: cri-dockerd version:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> host: containerd daemon status:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> host: containerd daemon config:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> host: containerd config dump:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> host: crio daemon status:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> host: crio daemon config:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> host: /etc/crio:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

>>> host: crio config:
* Profile "cilium-975631" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-975631"

----------------------- debugLogs end: cilium-975631 [took: 4.632642143s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-975631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-975631
--- SKIP: TestNetworkPlugins/group/cilium (4.81s)